id,node_id,name,full_name,private,owner,html_url,description,fork,created_at,updated_at,pushed_at,homepage,size,stargazers_count,watchers_count,language,has_issues,has_projects,has_downloads,has_wiki,has_pages,forks_count,archived,disabled,open_issues_count,license,topics,forks,open_issues,watchers,default_branch,permissions,temp_clone_token,organization,network_count,subscribers_count,readme,readme_html,allow_forking,visibility,is_template,template_repository,web_commit_signoff_required,has_discussions
107914493,MDEwOlJlcG9zaXRvcnkxMDc5MTQ0OTM=,datasette,simonw/datasette,0,9599,https://github.com/simonw/datasette,An open source multi-tool for exploring and publishing data,0,2017-10-23T00:39:03Z,2022-11-15T23:16:27Z,2022-11-16T03:47:14Z,https://datasette.io,5770,6628,6628,Python,1,0,1,1,0,463,0,0,435,apache-2.0,"[""asgi"", ""automatic-api"", ""csv"", ""datasets"", ""datasette"", ""datasette-io"", ""docker"", ""json"", ""python"", ""sql"", ""sqlite""]",463,435,6628,main,"{""admin"": false, ""maintain"": false, ""push"": false, ""triage"": false, ""pull"": false}",,,463,97,,,1,public,0,,0,1
110509816,MDEwOlJlcG9zaXRvcnkxMTA1MDk4MTY=,csvs-to-sqlite,simonw/csvs-to-sqlite,0,9599,https://github.com/simonw/csvs-to-sqlite,Convert CSV files into a SQLite database,0,2017-11-13T06:38:21Z,2021-11-18T16:33:39Z,2021-11-18T16:35:33Z,,138,655,655,Python,1,1,1,1,0,50,0,0,34,apache-2.0,"[""click"", ""csv"", ""datasette"", ""datasette-io"", ""datasette-tool"", ""pandas"", ""python"", ""sqlite""]",50,34,655,main,"{""admin"": false, ""maintain"": false, ""push"": false, ""triage"": false, ""pull"": false}",,,50,17,"# csvs-to-sqlite
[PyPI](https://pypi.org/project/csvs-to-sqlite/)
[Changelog](https://github.com/simonw/csvs-to-sqlite/releases)
[Tests](https://github.com/simonw/csvs-to-sqlite/actions?query=workflow%3ATest)
[License](https://github.com/simonw/csvs-to-sqlite/blob/main/LICENSE)
Convert CSV files into a SQLite database. Browse and publish that SQLite database with [Datasette](https://github.com/simonw/datasette).
Basic usage:
csvs-to-sqlite myfile.csv mydatabase.db
This will create a new SQLite database called `mydatabase.db` containing a
single table, `myfile`, containing the CSV content.
You can provide multiple CSV files:
csvs-to-sqlite one.csv two.csv bundle.db
The `bundle.db` database will contain two tables, `one` and `two`.
This means you can use wildcards:
csvs-to-sqlite ~/Downloads/*.csv my-downloads.db
If you pass a path to one or more directories, the script will recursively
search those directories for CSV files and create tables for each one.
csvs-to-sqlite ~/path/to/directory all-my-csvs.db
## Handling TSV (tab-separated values)
You can use the `-s` option to specify a different delimiter. If you want
to use a tab character you'll need to apply shell escaping like so:
csvs-to-sqlite my-file.tsv my-file.db -s $'\t'
## Refactoring columns into separate lookup tables
Let's say you have a CSV file that looks like this:
county,precinct,office,district,party,candidate,votes
Clark,1,President,,REP,John R. Kasich,5
Clark,2,President,,REP,John R. Kasich,0
Clark,3,President,,REP,John R. Kasich,7
([Real example taken from the Open Elections project](https://github.com/openelections/openelections-data-sd/blob/master/2016/20160607__sd__primary__clark__precinct.csv))
You can now convert selected columns into separate lookup tables using the new
`--extract-column` option (shortname: `-c`) - for example:
csvs-to-sqlite openelections-data-*/*.csv \
-c county:County:name \
-c precinct:Precinct:name \
-c office -c district -c party -c candidate \
openelections.db
The format is as follows:
column_name:optional_table_name:optional_table_value_column_name
If you just specify the column name e.g. `-c office`, the following table will
be created:
CREATE TABLE ""office"" (
""id"" INTEGER PRIMARY KEY,
""value"" TEXT
);
If you specify all three options, e.g. `-c precinct:Precinct:name` the table
will look like this:
CREATE TABLE ""Precinct"" (
""id"" INTEGER PRIMARY KEY,
""name"" TEXT
);
The original tables will be created like this:
CREATE TABLE ""ca__primary__san_francisco__precinct"" (
""county"" INTEGER,
""precinct"" INTEGER,
""office"" INTEGER,
""district"" INTEGER,
""party"" INTEGER,
""candidate"" INTEGER,
""votes"" INTEGER,
FOREIGN KEY (county) REFERENCES County(id),
FOREIGN KEY (party) REFERENCES party(id),
FOREIGN KEY (precinct) REFERENCES Precinct(id),
FOREIGN KEY (office) REFERENCES office(id),
FOREIGN KEY (candidate) REFERENCES candidate(id)
);
They will be populated with IDs that reference the new derived tables.
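Once the extraction has run, the lookup tables can be joined back in with ordinary SQL. Here is a minimal sketch using Python's sqlite3 module, assuming the `openelections.db` file and the table names from the example above:
```python
import sqlite3

# Join the fact table back to the lookup tables created by --extract-column
conn = sqlite3.connect('openelections.db')
sql = '''
    select County.name, Precinct.name, office.value, candidate.value, t.votes
    from [ca__primary__san_francisco__precinct] as t
    join County on County.id = t.county
    join Precinct on Precinct.id = t.precinct
    join office on office.id = t.office
    join candidate on candidate.id = t.candidate
    limit 5
'''
for row in conn.execute(sql):
    print(row)
```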
## Installation
$ pip install csvs-to-sqlite
`csvs-to-sqlite` now requires Python 3. If you are running Python 2 you can install the last version to support Python 2:
$ pip install csvs-to-sqlite==0.9.2
## csvs-to-sqlite --help
```
Usage: csvs-to-sqlite [OPTIONS] PATHS... DBNAME
PATHS: paths to individual .csv files or to directories containing .csvs
DBNAME: name of the SQLite database file to create
Options:
-s, --separator TEXT Field separator in input .csv
-q, --quoting INTEGER Control field quoting behavior per csv.QUOTE_*
constants. Use one of QUOTE_MINIMAL (0),
QUOTE_ALL (1), QUOTE_NONNUMERIC (2) or
QUOTE_NONE (3).
--skip-errors Skip lines with too many fields instead of
stopping the import
--replace-tables Replace tables if they already exist
-t, --table TEXT Table to use (instead of using CSV filename)
-c, --extract-column TEXT One or more columns to 'extract' into a
separate lookup table. If you pass a simple
column name that column will be replaced with
integer foreign key references to a new table
of that name. You can customize the name of
the table like so: state:States:state_name
This will pull unique values from the 'state'
column and use them to populate a new 'States'
table, with an id column primary key and a
state_name column containing the strings from
the original column.
-d, --date TEXT One or more columns to parse into ISO
formatted dates
-dt, --datetime TEXT One or more columns to parse into ISO
formatted datetimes
-df, --datetime-format TEXT One or more custom date format strings to try
when parsing dates/datetimes
-pk, --primary-key TEXT One or more columns to use as the primary key
-f, --fts TEXT One or more columns to use to populate a full-
text index
-i, --index TEXT Add index on this column (or a compound index
with -i col1,col2)
--shape TEXT Custom shape for the DB table - format is
csvcol:dbcol(TYPE),...
--filename-column TEXT Add a column with this name and populate with
CSV file name
--fixed-column <TEXT TEXT>... Populate column with a fixed string
--fixed-column-int <TEXT INTEGER>...
Populate column with a fixed integer
--fixed-column-float <TEXT FLOAT>...
Populate column with a fixed float
--no-index-fks Skip adding index to foreign key columns
created using --extract-column (default is to
add them)
--no-fulltext-fks Skip adding full-text index on values
extracted using --extract-column (default is
to add them)
--just-strings Import all columns as text strings by default
(and, if specified, still obey --shape,
--date/datetime, and --datetime-format)
--version Show the version and exit.
--help Show this message and exit.
```
","
csvs-to-sqlite
Convert CSV files into a SQLite database. Browse and publish that SQLite database with Datasette.
Basic usage:
csvs-to-sqlite myfile.csv mydatabase.db
This will create a new SQLite database called mydatabase.db containing a
single table, myfile, containing the CSV content.
You can provide multiple CSV files:
csvs-to-sqlite one.csv two.csv bundle.db
The bundle.db database will contain two tables, one and two.
This means you can use wildcards:
csvs-to-sqlite ~/Downloads/*.csv my-downloads.db
If you pass a path to one or more directories, the script will recursively
search those directories for CSV files and create tables for each one.
csvs-to-sqlite ~/path/to/directory all-my-csvs.db
Handling TSV (tab-separated values)
You can use the -s option to specify a different delimiter. If you want
to use a tab character you'll need to apply shell escaping like so:
csvs-to-sqlite my-file.tsv my-file.db -s $'\t'
Refactoring columns into separate lookup tables
Let's say you have a CSV file that looks like this:
county,precinct,office,district,party,candidate,votes
Clark,1,President,,REP,John R. Kasich,5
Clark,2,President,,REP,John R. Kasich,0
Clark,3,President,,REP,John R. Kasich,7
They will be populated with IDs that reference the new derived tables.
Installation
$ pip install csvs-to-sqlite
csvs-to-sqlite now requires Python 3. If you are running Python 2 you can install the last version to support Python 2:
$ pip install csvs-to-sqlite==0.9.2
csvs-to-sqlite --help
Usage: csvs-to-sqlite [OPTIONS] PATHS... DBNAME
PATHS: paths to individual .csv files or to directories containing .csvs
DBNAME: name of the SQLite database file to create
Options:
-s, --separator TEXT Field separator in input .csv
-q, --quoting INTEGER Control field quoting behavior per csv.QUOTE_*
constants. Use one of QUOTE_MINIMAL (0),
QUOTE_ALL (1), QUOTE_NONNUMERIC (2) or
QUOTE_NONE (3).
--skip-errors Skip lines with too many fields instead of
stopping the import
--replace-tables Replace tables if they already exist
-t, --table TEXT Table to use (instead of using CSV filename)
-c, --extract-column TEXT One or more columns to 'extract' into a
separate lookup table. If you pass a simple
column name that column will be replaced with
integer foreign key references to a new table
of that name. You can customize the name of
the table like so: state:States:state_name
This will pull unique values from the 'state'
column and use them to populate a new 'States'
table, with an id column primary key and a
state_name column containing the strings from
the original column.
-d, --date TEXT One or more columns to parse into ISO
formatted dates
-dt, --datetime TEXT One or more columns to parse into ISO
formatted datetimes
-df, --datetime-format TEXT One or more custom date format strings to try
when parsing dates/datetimes
-pk, --primary-key TEXT One or more columns to use as the primary key
-f, --fts TEXT One or more columns to use to populate a full-
text index
-i, --index TEXT Add index on this column (or a compound index
with -i col1,col2)
--shape TEXT Custom shape for the DB table - format is
csvcol:dbcol(TYPE),...
--filename-column TEXT Add a column with this name and populate with
CSV file name
--fixed-column <TEXT TEXT>... Populate column with a fixed string
--fixed-column-int <TEXT INTEGER>...
Populate column with a fixed integer
--fixed-column-float <TEXT FLOAT>...
Populate column with a fixed float
--no-index-fks Skip adding index to foreign key columns
created using --extract-column (default is to
add them)
--no-fulltext-fks Skip adding full-text index on values
extracted using --extract-column (default is
to add them)
--just-strings Import all columns as text strings by default
(and, if specified, still obey --shape,
--date/datetime, and --datetime-format)
--version Show the version and exit.
--help Show this message and exit.
Run datasette install datasette-cluster-map to add this plugin to your Datasette virtual environment. Datasette will automatically load the plugin if it is installed in this way.
If you are deploying using the datasette publish command you can use the --install option:
If any of your tables have a latitude and longitude column, a map will be automatically displayed.
Configuration
If your columns are called something else you can configure the column names using plugin configuration in a metadata.json file. For example, if all of your columns are called xlat and xlng you can create a metadata.json file like this:
{
""title"": ""Regular metadata keys can go here too"",
""plugins"": {
""datasette-cluster-map"": {
""latitude_column"": ""xlat"",
""longitude_column"": ""xlng""
}
}
}
Then run Datasette like this:
datasette mydata.db -m metadata.json
This will configure the required column names for every database loaded by that Datasette instance.
If you want to customize the column names for just one table in one database, you can do something like this:
You can also use a custom SQL query to rename those columns to latitude and longitude, for example:
select*,
""Capture Latitude""as latitude,
""Capture Longitude""as longitude
from [USGS_WC_eartag_deployments_2009-2011]
The map defaults to being displayed above the main results table on the page. You can use the ""container"" plugin setting to provide a CSS selector indicating an element that the map should be appended to instead.
Custom tile layers
You can customize the tile layer used by the maps using the tile_layer and tile_layer_options configuration settings. For example, to use the Stamen Watercolor tiles you can use these settings:
The marker popup defaults to displaying the data for the underlying database row.
You can customize this by including a popup column in your results containing JSON that defines a more useful popup.
The JSON in the popup column should look something like this:
{
""image"": ""https://niche-museums.imgix.net/dodgems.heic?w=800&h=400&fit=crop"",
""alt"": ""Dingles Fairground Heritage Centre"",
""title"": ""Dingles Fairground Heritage Centre"",
""description"": ""Home of the National Fairground Collection, Dingles has over 45,000 indoor square feet of vintage fairground rides... and you can go on them! Highlights include the last complete surviving and opera"",
""link"": ""/browse/museums/26""
}
Each of these columns is optional.
title is the title to show at the top of the popup
image is the URL to an image to display in the popup
alt is the alt attribute to use for that image
description is a longer string of text to use as a description
link is a URL that the marker content should link to
You can use the SQLite json_object() function to construct this data dynamically as part of your SQL query. Here's an example:
select json_object(
'image', photo_url ||'?w=800&h=400&fit=crop',
'title', name,
'description', substr(description, 0, 200),
'link', '/browse/museums/'|| id
) as popup,
latitude, longitude from museums
where id in (26, 27) order by id
To set up this plugin locally, first checkout the code. Then create a new virtual environment:
cd datasette-cluster-map
python3 -mvenv venv
source venv/bin/activate
Or if you are using pipenv:
pipenv shell
Now install the dependencies and tests:
pip install -e '.[test]'
To run the tests:
pytest
",1,public,0,,,
135007287,MDEwOlJlcG9zaXRvcnkxMzUwMDcyODc=,datasette-leaflet-geojson,simonw/datasette-leaflet-geojson,0,9599,https://github.com/simonw/datasette-leaflet-geojson,Datasette plugin that replaces any GeoJSON column values with a Leaflet map.,0,2018-05-27T01:42:30Z,2022-08-26T23:27:11Z,2022-08-26T23:27:08Z,,91,9,9,Python,1,1,1,1,0,4,0,0,3,,"[""datasette"", ""datasette-io"", ""datasette-plugin"", ""gis"", ""leaflet""]",4,3,9,main,"{""admin"": false, ""maintain"": false, ""push"": false, ""triage"": false, ""pull"": false}",,,4,2,"# datasette-leaflet-geojson
[PyPI](https://pypi.org/project/datasette-leaflet-geojson/)
[Changelog](https://github.com/simonw/datasette-leaflet-geojson/releases)
[Tests](https://github.com/simonw/datasette-leaflet-geojson/actions?query=workflow%3ATest)
[License](https://github.com/simonw/datasette-leaflet-geojson/blob/main/LICENSE)
Datasette plugin that replaces any GeoJSON column values with a Leaflet map
## Installation
Install this plugin in the same environment as Datasette.
$ datasette install datasette-leaflet-geojson
## Usage
Any columns containing valid GeoJSON strings will have their contents replaced with a Leaflet map when they are displayed in the Datasette interface.
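To create a column the plugin can pick up you just need text cells containing GeoJSON. As a rough sketch, here is one way to do that with the [sqlite-utils](https://github.com/simonw/sqlite-utils) Python library (an assumption on my part - any method of writing valid GeoJSON strings into a text column works):
```python
import json
import sqlite_utils

# Store a GeoJSON geometry as a JSON string in an ordinary text column
db = sqlite_utils.Database('places.db')
db['places'].insert({
    'name': 'Example point',
    'geometry': json.dumps({'type': 'Point', 'coordinates': [-122.4, 37.8]}),
})
```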
## Demo
You can try this plugin out at https://calands.datasettes.com/calands/superunits_with_maps

## Configuration
By default this plugin displays maps for the first ten rows, and shows a ""Click to load map"" prompt for rows past the first ten.
You can change this limit using the `default_maps_to_load` plugin configuration setting. Add this to your `metadata.json`:
```json
{
""plugins"": {
""datasette-leaflet-geojson"": {
""default_maps_to_load"": 20
}
}
}
```
Then run Datasette with `datasette mydb.db -m metadata.json`.
","
datasette-leaflet-geojson
Datasette plugin that replaces any GeoJSON column values with a Leaflet map
Installation
Install this plugin in the same environment as Datasette.
$ datasette install datasette-leaflet-geojson
Usage
Any columns containing valid GeoJSON strings will have their contents replaced with a Leaflet map when they are displayed in the Datasette interface.
Then run Datasette with datasette mydb.db -m metadata.json.
",1,public,0,,0,
138669673,MDEwOlJlcG9zaXRvcnkxMzg2Njk2NzM=,datasette-vega,simonw/datasette-vega,0,9599,https://github.com/simonw/datasette-vega,Datasette plugin for visualizing data using Vega,0,2018-06-26T01:40:54Z,2021-12-10T22:20:46Z,2021-12-10T22:20:43Z,,59,42,42,JavaScript,1,1,1,1,0,2,0,0,31,apache-2.0,"[""datasette"", ""datasette-io"", ""datasette-plugin"", ""plugin"", ""react"", ""vega""]",2,31,42,master,"{""admin"": false, ""maintain"": false, ""push"": false, ""triage"": false, ""pull"": false}",,,2,2,"# datasette-vega
[PyPI](https://pypi.org/project/datasette-vega/)
[License](https://github.com/simonw/datasette-vega/blob/master/LICENSE)
A [Datasette](https://github.com/simonw/datasette) plugin that provides tools
for generating charts using [Vega](https://vega.github.io/).

Try out the latest master build as a live demo at https://datasette-vega-latest.datasette.io/ or try the latest release installed as a plugin at https://fivethirtyeight.datasettes.com/
To add this to your Datasette installation, install the plugin like so:
pip install datasette-vega
The plugin will then add itself to every Datasette table view.
If you are publishing data using the `datasette publish` command, you can
include this plugin like so:
datasette publish now mydatabase.db --install=datasette-vega
","
datasette-vega
A Datasette plugin that provides tools
for generating charts using Vega.
To add this to your Datasette installation, install the plugin like so:
pip install datasette-vega
The plugin will then add itself to every Datasette table view.
If you are publishing data using the datasette publish command, you can
include this plugin like so:
datasette publish now mydatabase.db --install=datasette-vega
",1,public,0,,,
140912432,MDEwOlJlcG9zaXRvcnkxNDA5MTI0MzI=,sqlite-utils,simonw/sqlite-utils,0,9599,https://github.com/simonw/sqlite-utils,Python CLI utility and library for manipulating SQLite databases,0,2018-07-14T03:21:46Z,2022-11-15T18:12:16Z,2022-11-15T15:53:38Z,https://sqlite-utils.datasette.io,1437,1029,1029,Python,1,1,1,1,0,79,0,0,72,apache-2.0,"[""cli"", ""click"", ""datasette"", ""datasette-io"", ""datasette-tool"", ""python"", ""sqlite"", ""sqlite-database""]",79,72,1029,main,"{""admin"": false, ""maintain"": false, ""push"": false, ""triage"": false, ""pull"": false}",,,79,16,"# sqlite-utils
[PyPI](https://pypi.org/project/sqlite-utils/)
[Changelog](https://sqlite-utils.datasette.io/en/stable/changelog.html)
[PyPI](https://pypi.org/project/sqlite-utils/)
[Tests](https://github.com/simonw/sqlite-utils/actions?query=workflow%3ATest)
[Documentation](http://sqlite-utils.datasette.io/en/stable/?badge=stable)
[Coverage](https://codecov.io/gh/simonw/sqlite-utils)
[License](https://github.com/simonw/sqlite-utils/blob/main/LICENSE)
[Discord](https://discord.gg/Ass7bCAMDw)
Python CLI utility and library for manipulating SQLite databases.
## Some feature highlights
- [Pipe JSON](https://sqlite-utils.datasette.io/en/stable/cli.html#inserting-json-data) (or [CSV or TSV](https://sqlite-utils.datasette.io/en/stable/cli.html#inserting-csv-or-tsv-data)) directly into a new SQLite database file, automatically creating a table with the appropriate schema
- [Run in-memory SQL queries](https://sqlite-utils.datasette.io/en/stable/cli.html#querying-data-directly-using-an-in-memory-database), including joins, directly against data in CSV, TSV or JSON files and view the results
- [Configure SQLite full-text search](https://sqlite-utils.datasette.io/en/stable/cli.html#configuring-full-text-search) against your database tables and run search queries against them, ordered by relevance
- Run [transformations against your tables](https://sqlite-utils.datasette.io/en/stable/cli.html#transforming-tables) to make schema changes that SQLite `ALTER TABLE` does not directly support, such as changing the type of a column
- [Extract columns](https://sqlite-utils.datasette.io/en/stable/cli.html#extracting-columns-into-a-separate-table) into separate tables to better normalize your existing data
Read more on my blog, in this series of posts on [New features in sqlite-utils](https://simonwillison.net/series/sqlite-utils-features/) and other [entries tagged sqliteutils](https://simonwillison.net/tags/sqliteutils/).
## Installation
pip install sqlite-utils
Or if you use [Homebrew](https://brew.sh/) for macOS:
brew install sqlite-utils
## Using as a CLI tool
Now you can do things with the CLI utility like this:
$ sqlite-utils memory dogs.csv ""select * from t""
[{""id"": 1, ""age"": 4, ""name"": ""Cleo""},
{""id"": 2, ""age"": 2, ""name"": ""Pancakes""}]
$ sqlite-utils insert dogs.db dogs dogs.csv --csv
[####################################] 100%
$ sqlite-utils tables dogs.db --counts
[{""table"": ""dogs"", ""count"": 2}]
$ sqlite-utils dogs.db ""select id, name from dogs""
[{""id"": 1, ""name"": ""Cleo""},
{""id"": 2, ""name"": ""Pancakes""}]
$ sqlite-utils dogs.db ""select * from dogs"" --csv
id,age,name
1,4,Cleo
2,2,Pancakes
$ sqlite-utils dogs.db ""select * from dogs"" --table
id age name
---- ----- --------
1 4 Cleo
2 2 Pancakes
You can import JSON data into a new database table like this:
$ curl https://api.github.com/repos/simonw/sqlite-utils/releases \
| sqlite-utils insert releases.db releases - --pk id
Or for data in a CSV file:
$ sqlite-utils insert dogs.db dogs dogs.csv --csv
`sqlite-utils memory` lets you import CSV or JSON data into an in-memory database and run SQL queries against it in a single command:
$ cat dogs.csv | sqlite-utils memory - ""select name, age from stdin""
See the [full CLI documentation](https://sqlite-utils.datasette.io/en/stable/cli.html) for comprehensive coverage of many more commands.
## Using as a library
You can also `import sqlite_utils` and use it as a Python library like this:
```python
import sqlite_utils
db = sqlite_utils.Database(""demo_database.db"")
# This line creates a ""dogs"" table if one does not already exist:
db[""dogs""].insert_all([
{""id"": 1, ""age"": 4, ""name"": ""Cleo""},
{""id"": 2, ""age"": 2, ""name"": ""Pancakes""}
], pk=""id"")
```
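As a rough sketch of reading that data back out (this assumes a recent sqlite-utils 3.x, where `db.query()` returns dictionaries):
```python
import sqlite_utils

db = sqlite_utils.Database('demo_database.db')
# Iterate over the rows inserted above - each row comes back as a dictionary
for row in db.query('select id, name, age from dogs order by age desc'):
    print(row)
```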
Check out the [full library documentation](https://sqlite-utils.datasette.io/en/stable/python-api.html) for everything else you can do with the Python library.
## Related projects
* [Datasette](https://datasette.io/): A tool for exploring and publishing data
* [csvs-to-sqlite](https://github.com/simonw/csvs-to-sqlite): Convert CSV files into a SQLite database
* [db-to-sqlite](https://github.com/simonw/db-to-sqlite): CLI tool for exporting a MySQL or PostgreSQL database as a SQLite file
* [dogsheep](https://dogsheep.github.io/): A family of tools for personal analytics, built on top of `sqlite-utils`
","
sqlite-utils
Python CLI utility and library for manipulating SQLite databases.
Some feature highlights
Pipe JSON (or CSV or TSV) directly into a new SQLite database file, automatically creating a table with the appropriate schema
Run in-memory SQL queries, including joins, directly against data in CSV, TSV or JSON files and view the results
Run transformations against your tables to make schema changes that SQLite ALTER TABLE does not directly support, such as changing the type of a column
Extract columns into separate tables to better normalize your existing data
You can also import sqlite_utils and use it as a Python library like this:
importsqlite_utilsdb=sqlite_utils.Database(""demo_database.db"")
# This line creates a ""dogs"" table if one does not already exist:db[""dogs""].insert_all([
{""id"": 1, ""age"": 4, ""name"": ""Cleo""},
{""id"": 2, ""age"": 2, ""name"": ""Pancakes""}
], pk=""id"")
Datasette: A tool for exploring and publishing data
csvs-to-sqlite: Convert CSV files into a SQLite database
db-to-sqlite: CLI tool for exporting a MySQL or PostgreSQL database as a SQLite file
dogsheep: A family of tools for personal analytics, built on top of sqlite-utils
",1,public,0,,0,0
142967347,MDEwOlJlcG9zaXRvcnkxNDI5NjczNDc=,datasette-json-html,simonw/datasette-json-html,0,9599,https://github.com/simonw/datasette-json-html,Datasette plugin for rendering HTML based on JSON values,0,2018-07-31T05:41:39Z,2022-03-15T04:54:15Z,2022-03-22T01:43:59Z,,46,19,19,Python,1,1,1,1,0,1,0,0,0,apache-2.0,"[""datasette"", ""datasette-io"", ""datasette-plugin"", ""plugin""]",1,0,19,main,"{""admin"": false, ""maintain"": false, ""push"": false, ""triage"": false, ""pull"": false}",,,1,4,"# datasette-json-html
[PyPI](https://pypi.org/project/datasette-json-html/)
[Changelog](https://github.com/simonw/datasette-json-html/releases)
[Tests](https://github.com/simonw/datasette-remote-metadata/actions?query=workflow%3ATest)
[License](https://github.com/simonw/datasette-json-html/blob/main/LICENSE)
Datasette plugin for rendering HTML based on JSON values, using the [render_cell plugin hook](https://docs.datasette.io/en/stable/plugin_hooks.html#render-cell-value-column-table-database-datasette).
This plugin looks for cell values that match a very specific JSON format and converts them into HTML when they are rendered by the Datasette interface.
## Links
{
""href"": ""https://simonwillison.net/"",
""label"": ""Simon Willison""
}
Will be rendered as an `<a>` link:
Simon Willison
You can set a tooltip on the link using a `""title""` key:
{
""href"": ""https://simonwillison.net/"",
""label"": ""Simon Willison"",
""title"": ""My blog""
}
Produces:
Simon Willison
You can also include a description, which will be displayed below the link. If descriptions include newlines they will be converted to `<br>` elements:
select json_object(
""href"", ""https://simonwillison.net/"",
""label"", ""Simon Willison"",
""description"", ""This can contain"" || x'0a' || ""newlines""
)
Produces:
Simon Willison This can contain newlines
* [Literal JSON link demo](https://datasette-json-html.datasette.io/demo?sql=select+%27%7B%0D%0A++++%22href%22%3A+%22https%3A%2F%2Fsimonwillison.net%2F%22%2C%0D%0A++++%22label%22%3A+%22Simon+Willison%22%2C%0D%0A++++%22title%22%3A+%22My+blog%22%0D%0A%7D%27)
## List of links
[
{
""href"": ""https://simonwillison.net/"",
""label"": ""Simon Willison""
},
{
""href"": ""https://github.com/simonw/datasette"",
""label"": ""Datasette""
}
]
Will be rendered as a comma-separated list of `<a>` links:
Simon Willison,
Datasette
The `href` property must begin with `https://` or `http://` or `/`, to avoid potential XSS injection attacks (for example URLs that begin with `javascript:`).
Lists of links cannot include `""description""` keys.
* [Literal list of links demo](https://datasette-json-html.datasette.io/demo?sql=select+%27%5B%0D%0A++++%7B%0D%0A++++++++%22href%22%3A+%22https%3A%2F%2Fsimonwillison.net%2F%22%2C%0D%0A++++++++%22label%22%3A+%22Simon+Willison%22%0D%0A++++%7D%2C%0D%0A++++%7B%0D%0A++++++++%22href%22%3A+%22https%3A%2F%2Fgithub.com%2Fsimonw%2Fdatasette%22%2C%0D%0A++++++++%22label%22%3A+%22Datasette%22%0D%0A++++%7D%0D%0A%5D%27)
## Images
The image tag is more complex. The most basic version looks like this:
{
""img_src"": ""https://placekitten.com/200/300""
}
This will render as:
But you can also include one or more of `alt`, `caption`, `width` and `href`.
If you include width or alt, they will be added as attributes:
{
""img_src"": ""https://placekitten.com/200/300"",
""alt"": ""Kitten"",
""width"": 200
}
Produces:
* [Literal image demo](https://datasette-json-html.datasette.io/demo?sql=select+%27%7B%0D%0A++++%22img_src%22%3A+%22https%3A%2F%2Fplacekitten.com%2F200%2F300%22%2C%0D%0A++++%22alt%22%3A+%22Kitten%22%2C%0D%0A++++%22width%22%3A+200%0D%0A%7D%27)
The `href` key will cause the image to be wrapped in a link:
{
""img_src"": ""https://placekitten.com/200/300"",
""href"": ""http://www.example.com""
}
Produces:
The `caption` key wraps everything in a fancy figure/figcaption block:
{
""img_src"": ""https://placekitten.com/200/300"",
""caption"": ""Kitten caption""
}
Produces:
Kitten caption
## Preformatted text
You can use `{""pre"": ""text""}` to render text in a `<pre>` HTML tag:
{
""pre"": ""This\nhas\nnewlines""
}
Produces:
This
has
newlines
If the value attached to the `""pre""` key is itself a JSON object, that JSON will be pretty-printed:
{
""pre"": {
""this"": {
""object"": [""is"", ""nested""]
}
}
}
Produces:
{
    ""this"": {
        ""object"": [
            ""is"",
            ""nested""
        ]
    }
}
* [Preformatted text with JSON demo](https://datasette-json-html.datasette.io/demo?sql=select+%27%7B%0D%0A++++%22pre%22%3A+%7B%0D%0A++++++++%22this%22%3A+%7B%0D%0A++++++++++++%22object%22%3A+%5B%22is%22%2C+%22nested%22%5D%0D%0A++++++++%7D%0D%0A++++%7D%0D%0A%7D%27)
* [Preformatted text demo showing the Mandelbrot Set](https://datasette-json-html.datasette.io/demo?sql=WITH+RECURSIVE%0D%0A++xaxis%28x%29+AS+%28VALUES%28-2.0%29+UNION+ALL+SELECT+x%2B0.05+FROM+xaxis+WHERE+x%3C1.2%29%2C%0D%0A++yaxis%28y%29+AS+%28VALUES%28-1.0%29+UNION+ALL+SELECT+y%2B0.1+FROM+yaxis+WHERE+y%3C1.0%29%2C%0D%0A++m%28iter%2C+cx%2C+cy%2C+x%2C+y%29+AS+%28%0D%0A++++SELECT+0%2C+x%2C+y%2C+0.0%2C+0.0+FROM+xaxis%2C+yaxis%0D%0A++++UNION+ALL%0D%0A++++SELECT+iter%2B1%2C+cx%2C+cy%2C+x*x-y*y+%2B+cx%2C+2.0*x*y+%2B+cy+FROM+m+%0D%0A+++++WHERE+%28x*x+%2B+y*y%29+%3C+4.0+AND+iter%3C28%0D%0A++%29%2C%0D%0A++m2%28iter%2C+cx%2C+cy%29+AS+%28%0D%0A++++SELECT+max%28iter%29%2C+cx%2C+cy+FROM+m+GROUP+BY+cx%2C+cy%0D%0A++%29%2C%0D%0A++a%28t%29+AS+%28%0D%0A++++SELECT+group_concat%28+substr%28%27+.%2B*%23%27%2C+1%2Bmin%28iter%2F7%2C4%29%2C+1%29%2C+%27%27%29+%0D%0A++++FROM+m2+GROUP+BY+cy%0D%0A++%29%0D%0ASELECT+json_object%28%27pre%27%2C+group_concat%28rtrim%28t%29%2Cx%270a%27%29%29+FROM+a%3B) using [this example](https://www.sqlite.org/lang_with.html#outlandish_recursive_query_examples) from the SQLite documentation
## Using these with SQLite JSON functions
The most powerful way to make use of this plugin is in conjunction with SQLite's [JSON functions](https://www.sqlite.org/json1.html). For example:
select json_object(
""href"", ""https://simonwillison.net/"",
""label"", ""Simon Willison""
);
* [json_object() link demo](https://datasette-json-html.datasette.io/demo?sql=select+json_object%28%0D%0A++++%22href%22%2C+%22https%3A%2F%2Fsimonwillison.net%2F%22%2C%0D%0A++++%22label%22%2C+%22Simon+Willison%22%0D%0A%29%3B)
You can use these functions to construct JSON objects that work with the plugin from data in a table:
select id, json_object(
""href"", url, ""label"", text
) from mytable;
* [Demo that builds links against a table](https://datasette-json-html.datasette.io/demo?sql=select+json_object%28%22href%22%2C+url%2C+%22label%22%2C+package%2C+%22title%22%2C+package+%7C%7C+%22+%22+%7C%7C+url%29+as+package+from+packages)
The `json_group_array()` function is an aggregate function similar to `group_concat()` - it allows you to construct lists of JSON objects in conjunction with a `GROUP BY` clause.
This means you can use it to construct dynamic lists of links, for example:
select
substr(package, 0, 12) as prefix,
json_group_array(
json_object(
""href"", url,
""label"", package
)
) as package_links
from packages
group by prefix
* [Demo of json_group_array()](https://datasette-json-html.datasette.io/demo?sql=select%0D%0A++++substr%28package%2C+0%2C+12%29+as+prefix%2C%0D%0A++++json_group_array%28%0D%0A++++++++json_object%28%0D%0A++++++++++++%22href%22%2C+url%2C%0D%0A++++++++++++%22label%22%2C+package%0D%0A++++++++%29%0D%0A++++%29+as+package_links%0D%0Afrom+packages%0D%0Agroup+by+prefix)
## The `urllib_quote_plus()` SQL function
Since this plugin is designed to be used with SQL that constructs the underlying JSON structure, it is likely you will need to construct dynamic URLs from results returned by a SQL query.
This plugin registers a custom SQLite function called `urllib_quote_plus()` to help you do that. It lets you use Python's [urllib.parse.quote\_plus() function](https://docs.python.org/3/library/urllib.parse.html#urllib.parse.quote_plus) from within a SQL query.
Here's an example of how you might use it:
select id, json_object(
""href"",
""/mydatabase/other_table?_search="" || urllib_quote_plus(text),
""label"", text
) from mytable;
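Outside Datasette you can reproduce the same idea by registering the function on a plain sqlite3 connection yourself. A minimal sketch of that technique (not the plugin's own code):
```python
import sqlite3
from urllib.parse import quote_plus

conn = sqlite3.connect(':memory:')
# Register a one-argument SQL function called urllib_quote_plus()
conn.create_function('urllib_quote_plus', 1, quote_plus)
sql = ""select '/mydatabase/other_table?_search=' || urllib_quote_plus('dogs & cats')""
print(conn.execute(sql).fetchone()[0])
```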
","
datasette-json-html
Datasette plugin for rendering HTML based on JSON values, using the render_cell plugin hook.
This plugin looks for cell values that match a very specific JSON format and converts them into HTML when they are rendered by the Datasette interface.
The json_group_array() function is an aggregate function similar to group_concat() - it allows you to construct lists of JSON objects in conjunction with a GROUP BY clause.
This means you can use it to construct dynamic lists of links, for example:
select
substr(package, 0, 12) as prefix,
json_group_array(
json_object(
""href"", url,
""label"", package
)
) as package_links
from packages
group by prefix
Since this plugin is designed to be used with SQL that constructs the underlying JSON structure, it is likely you will need to construct dynamic URLs from results returned by a SQL query.
This plugin registers a custom SQLite function called urllib_quote_plus() to help you do that. It lets you use Python's urllib.parse.quote_plus() function from within a SQL query.
Here's an example of how you might use it:
select id, json_object(
""href"",
""/mydatabase/other_table?_search="" || urllib_quote_plus(text),
""label"", text
) from mytable;
",1,public,0,,,
145483077,MDEwOlJlcG9zaXRvcnkxNDU0ODMwNzc=,datasette-render-images,simonw/datasette-render-images,0,9599,https://github.com/simonw/datasette-render-images,Datasette plugin that renders binary blob images using data-uris,0,2018-08-21T00:05:47Z,2022-08-11T16:06:11Z,2022-08-11T16:06:08Z,https://datasette-render-images-demo.datasette.io/favicons/favicons,35,14,14,Python,1,1,1,1,0,2,0,0,3,,"[""datasette"", ""datasette-io"", ""datasette-plugin"", ""plugin""]",2,3,14,main,"{""admin"": false, ""maintain"": false, ""push"": false, ""triage"": false, ""pull"": false}",,,2,2,"# datasette-render-images
[PyPI](https://pypi.org/project/datasette-render-images/)
[Changelog](https://github.com/simonw/datasette-render-images/releases)
[Tests](https://github.com/simonw/datasette-render-images/actions?query=workflow%3ATest)
[License](https://github.com/simonw/datasette-render-images/blob/main/LICENSE)
A Datasette plugin that renders binary blob images with data-uris, using the [render_cell() plugin hook](https://docs.datasette.io/en/stable/plugins.html#render-cell-value-column-table-database-datasette).
## Installation
Install this plugin in the same environment as Datasette.
$ pip install datasette-render-images
## Usage
If a database row contains binary image data (PNG, GIF or JPEG), this plugin will detect that it is an image (using the [imghdr module](https://docs.python.org/3/library/imghdr.html)) and render that cell using an `<img src=""data:image/png;base64,..."">` element.
Here's a [demo of the plugin in action](https://datasette-render-images-demo.datasette.io/favicons/favicons).
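The general approach looks roughly like this - a sketch of the technique rather than the plugin's exact code:
```python
import base64
import imghdr

def render_image_cell(value):
    # Only binary blobs that imghdr recognises as images get rendered
    if not isinstance(value, bytes):
        return None
    image_type = imghdr.what(None, h=value)
    if image_type not in ('png', 'gif', 'jpeg'):
        return None
    encoded = base64.b64encode(value).decode('utf-8')
    return '<img src=""data:image/{};base64,{}"">'.format(image_type, encoded)
```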
## Creating a compatible database table
You can use the [sqlite-utils insert-files](https://sqlite-utils.datasette.io/en/stable/cli.html#inserting-data-from-files) command to insert image files into a database table:
$ pip install sqlite-utils
$ sqlite-utils insert-files gifs.db images *.gif
See [Fun with binary data and SQLite](https://simonwillison.net/2020/Jul/30/fun-binary-data-and-sqlite/) for more on this tool.
## Configuration
By default the plugin will only render images that are smaller than 100KB. You can adjust this limit using the `size_limit` plugin configuration option - for example, to increase the limit to 1MB (1000000 bytes) use the following in `metadata.json`:
```json
{
""plugins"": {
""datasette-render-images"": {
""size_limit"": 1000000
}
}
}
```
","
datasette-render-images
A Datasette plugin that renders binary blob images with data-uris, using the render_cell() plugin hook.
Installation
Install this plugin in the same environment as Datasette.
$ pip install datasette-render-images
Usage
If a database row contains binary image data (PNG, GIF or JPEG), this plugin will detect that it is an image (using the imghdr module and render that cell using an <img src=""data:image/png;base64,...""> element.
By default the plugin will only render images that are smaller than 100KB. You can adjust this limit using the size_limit plugin configuration option - for example, to increase the limit to 1MB (1000000 bytes) use the following in metadata.json:
",1,public,0,,0,
163790822,MDEwOlJlcG9zaXRvcnkxNjM3OTA4MjI=,datasette-sqlite-fts4,simonw/datasette-sqlite-fts4,0,9599,https://github.com/simonw/datasette-sqlite-fts4,Datasette plugin that adds custom SQL functions for working with SQLite FTS4,0,2019-01-02T03:40:41Z,2022-07-31T16:33:25Z,2022-07-31T14:46:26Z,https://datasette.io/plugins/datasette-sqlite-fts4,14,3,3,Python,1,1,1,1,0,1,0,0,0,apache-2.0,"[""datasette"", ""datasette-io"", ""datasette-plugin"", ""plugin""]",1,0,3,main,"{""admin"": false, ""maintain"": false, ""push"": false, ""triage"": false, ""pull"": false}",,,1,2,"# datasette-sqlite-fts4
[PyPI](https://pypi.org/project/datasette-sqlite-fts4/)
[Changelog](https://github.com/simonw/datasette-sqlite-fts4/releases)
[Tests](https://github.com/simonw/datasette-sqlite-fts4/actions?query=workflow%3ATest)
[License](https://github.com/simonw/datasette-sqlite-fts4/blob/main/LICENSE)
Datasette plugin that exposes the custom SQL functions from [sqlite-fts4](https://github.com/simonw/sqlite-fts4).
[Interactive demo](https://datasette-sqlite-fts4.datasette.io/24ways-fts4?sql=select%0D%0A++++json_object%28%0D%0A++++++++""label""%2C+articles.title%2C+""href""%2C+articles.url%0D%0A++++%29+as+article%2C%0D%0A++++articles.author%2C%0D%0A++++rank_score%28matchinfo%28articles_fts%2C+""pcx""%29%29+as+score%2C%0D%0A++++rank_bm25%28matchinfo%28articles_fts%2C+""pcnalx""%29%29+as+bm25%2C%0D%0A++++json_object%28%0D%0A++++++++""pre""%2C+annotate_matchinfo%28matchinfo%28articles_fts%2C+""pcxnalyb""%29%2C+""pcxnalyb""%29%0D%0A++++%29+as+annotated_matchinfo%2C%0D%0A++++matchinfo%28articles_fts%2C+""pcxnalyb""%29+as+matchinfo%2C%0D%0A++++decode_matchinfo%28matchinfo%28articles_fts%2C+""pcxnalyb""%29%29+as+decoded_matchinfo%0D%0Afrom%0D%0A++++articles_fts+join+articles+on+articles.rowid+%3D+articles_fts.rowid%0D%0Awhere%0D%0A++++articles_fts+match+%3Asearch%0D%0Aorder+by+bm25&search=jquery+maps). Read [Exploring search relevance algorithms with SQLite](https://simonwillison.net/2019/Jan/7/exploring-search-relevance-algorithms-sqlite/) for further details on this project.
## Installation
pip install datasette-sqlite-fts4
If you are deploying a database using `datasette publish` you can include this plugin using the `--install` option:
datasette publish now mydb.db --install=datasette-sqlite-fts4
","
datasette-sqlite-fts4
Datasette plugin that exposes the custom SQL functions from sqlite-fts4.
If you are deploying a database using datasette publish you can include this plugin using the --install option:
datasette publish now mydb.db --install=datasette-sqlite-fts4
",1,public,0,,0,
166159072,MDEwOlJlcG9zaXRvcnkxNjYxNTkwNzI=,db-to-sqlite,simonw/db-to-sqlite,0,9599,https://github.com/simonw/db-to-sqlite,CLI tool for exporting tables or queries from any SQL database to a SQLite file,0,2019-01-17T04:16:48Z,2021-06-11T22:52:12Z,2021-06-11T22:55:56Z,,77,226,226,Python,1,1,1,1,0,12,0,0,2,apache-2.0,"[""sqlalchemy"", ""sqlite"", ""datasette"", ""datasette-io"", ""datasette-tool""]",12,2,226,main,"{""admin"": false, ""push"": false, ""pull"": false}",,,12,4,"# db-to-sqlite
[PyPI](https://pypi.python.org/pypi/db-to-sqlite)
[Changelog](https://github.com/simonw/db-to-sqlite/releases)
[Tests](https://github.com/simonw/db-to-sqlite/actions?query=workflow%3ATest)
[License](https://github.com/simonw/db-to-sqlite/blob/main/LICENSE)
CLI tool for exporting tables or queries from any SQL database to a SQLite file.
## Installation
Install from PyPI like so:
pip install db-to-sqlite
If you want to use it with MySQL, you can install the extra dependency like this:
pip install 'db-to-sqlite[mysql]'
Installing the `mysqlclient` library on OS X can be tricky - I've found [this recipe](https://gist.github.com/simonw/90ac0afd204cd0d6d9c3135c3888d116) to work (run that before installing `db-to-sqlite`).
For PostgreSQL, use this:
pip install 'db-to-sqlite[postgresql]'
## Usage
Usage: db-to-sqlite [OPTIONS] CONNECTION PATH
Load data from any database into SQLite.
PATH is a path to the SQLite file to create, e.g. /tmp/my_database.db
CONNECTION is a SQLAlchemy connection string, for example:
postgresql://localhost/my_database
postgresql://username:passwd@localhost/my_database
mysql://root@localhost/my_database
mysql://username:passwd@localhost/my_database
More: https://docs.sqlalchemy.org/en/13/core/engines.html#database-urls
Options:
--version Show the version and exit.
--all Detect and copy all tables
--table TEXT Specific tables to copy
--skip TEXT When using --all skip these tables
--redact TEXT... (table, column) pairs to redact with ***
--sql TEXT Optional SQL query to run
--output TEXT Table in which to save --sql query results
--pk TEXT Optional column to use as a primary key
--index-fks / --no-index-fks Should foreign keys have indexes? Default on
-p, --progress Show progress bar
--postgres-schema TEXT PostgreSQL schema to use
--help Show this message and exit.
For example, to save the content of the `blog_entry` table from a PostgreSQL database to a local file called `blog.db` you could do this:
db-to-sqlite ""postgresql://localhost/myblog"" blog.db \
--table=blog_entry
You can specify `--table` more than once.
You can also save the data from all of your tables, effectively creating a SQLite copy of your entire database. Any foreign key relationships will be detected and added to the SQLite database. For example:
db-to-sqlite ""postgresql://localhost/myblog"" blog.db \
--all
When running `--all` you can specify tables to skip using `--skip`:
db-to-sqlite ""postgresql://localhost/myblog"" blog.db \
--all \
--skip=django_migrations
If you want to save the results of a custom SQL query, do this:
db-to-sqlite ""postgresql://localhost/myblog"" output.db \
--output=query_results \
--sql=""select id, title, created from blog_entry"" \
--pk=id
The `--output` option specifies the table that should contain the results of the query.
## Using db-to-sqlite with PostgreSQL schemas
If the tables you want to copy from your PostgreSQL database aren't in the default schema, you can specify an alternate one with the `--postgres-schema` option:
db-to-sqlite ""postgresql://localhost/myblog"" blog.db \
--all \
--postgres-schema my_schema
## Using db-to-sqlite with Heroku Postgres
If you run an application on [Heroku](https://www.heroku.com/) using their [Postgres database product](https://www.heroku.com/postgres), you can use the `heroku config` command to access a compatible connection string:
$ heroku config --app myappname | grep HEROKU_POSTG
HEROKU_POSTGRESQL_OLIVE_URL: postgres://username:password@ec2-xxx-xxx-xxx-x.compute-1.amazonaws.com:5432/dbname
You can pass this to `db-to-sqlite` to create a local SQLite database with the data from your Heroku instance.
You can even do this using a bash one-liner:
$ db-to-sqlite $(heroku config --app myappname | grep HEROKU_POSTG | cut -d: -f 2-) \
/tmp/heroku.db --all -p
1/23: django_migrations
...
17/23: blog_blogmark
[####################################] 100%
...
## Related projects
* [Datasette](https://github.com/simonw/datasette): A tool for exploring and publishing data. Works great with SQLite files generated using `db-to-sqlite`.
* [sqlite-utils](https://github.com/simonw/sqlite-utils): Python CLI utility and library for manipulating SQLite databases.
* [csvs-to-sqlite](https://github.com/simonw/csvs-to-sqlite): Convert CSV files into a SQLite database.
## Development
To set up this tool locally, first checkout the code. Then create a new virtual environment:
cd db-to-sqlite
python3 -mvenv venv
source venv/bin/activate
Or if you are using `pipenv`:
pipenv shell
Now install the dependencies and test dependencies:
pip install -e '.[test]'
To run the tests:
pytest
This will skip tests against MySQL or PostgreSQL if you do not have their additional dependencies installed.
You can install those extra dependencies like so:
pip install -e '.[test_mysql,test_postgresql]'
You can alternatively use `pip install psycopg2-binary` if you cannot install the `psycopg2` dependency used by the `test_postgresql` extra.
See [Running a MySQL server using Homebrew](https://til.simonwillison.net/homebrew/mysql-homebrew) for tips on running the tests against MySQL on macOS, including how to install the `mysqlclient` dependency.
The PostgreSQL and MySQL tests default to expecting to run against servers on localhost. You can use environment variables to point them at different test database servers:
- `MYSQL_TEST_DB_CONNECTION` - defaults to `mysql://root@localhost/test_db_to_sqlite`
- `POSTGRESQL_TEST_DB_CONNECTION` - defaults to `postgresql://localhost/test_db_to_sqlite`
The database you indicate in the environment variable - `test_db_to_sqlite` by default - will be deleted and recreated on every test run.
","
db-to-sqlite
CLI tool for exporting tables or queries from any SQL database to a SQLite file.
Installation
Install from PyPI like so:
pip install db-to-sqlite
If you want to use it with MySQL, you can install the extra dependency like this:
pip install 'db-to-sqlite[mysql]'
Installing the mysqlclient library on OS X can be tricky - I've found this recipe to work (run that before installing db-to-sqlite).
For PostgreSQL, use this:
pip install 'db-to-sqlite[postgresql]'
Usage
Usage: db-to-sqlite [OPTIONS] CONNECTION PATH
Load data from any database into SQLite.
PATH is a path to the SQLite file to create, e.c. /tmp/my_database.db
CONNECTION is a SQLAlchemy connection string, for example:
postgresql://localhost/my_database
postgresql://username:passwd@localhost/my_database
mysql://root@localhost/my_database
mysql://username:passwd@localhost/my_database
More: https://docs.sqlalchemy.org/en/13/core/engines.html#database-urls
Options:
--version Show the version and exit.
--all Detect and copy all tables
--table TEXT Specific tables to copy
--skip TEXT When using --all skip these tables
--redact TEXT... (table, column) pairs to redact with ***
--sql TEXT Optional SQL query to run
--output TEXT Table in which to save --sql query results
--pk TEXT Optional column to use as a primary key
--index-fks / --no-index-fks Should foreign keys have indexes? Default on
-p, --progress Show progress bar
--postgres-schema TEXT PostgreSQL schema to use
--help Show this message and exit.
For example, to save the content of the blog_entry table from a PostgreSQL database to a local file called blog.db you could do this:
You can also save the data from all of your tables, effectively creating a SQLite copy of your entire database. Any foreign key relationships will be detected and added to the SQLite database. For example:
If you want to save the results of a custom SQL query, do this:
db-to-sqlite ""postgresql://localhost/myblog"" output.db \
--output=query_results \
--sql=""select id, title, created from blog_entry"" \
--pk=id
The --output option specifies the table that should contain the results of the query.
Using db-to-sqlite with PostgreSQL schemas
If the tables you want to copy from your PostgreSQL database aren't in the default schema, you can specify an alternate one with the --postgres-schema option:
If you run an application on Heroku using their Postgres database product, you can use the heroku config command to access a compatible connection string:
Datasette: A tool for exploring and publishing data. Works great with SQLite files generated using db-to-sqlite.
sqlite-utils: Python CLI utility and library for manipulating SQLite databases.
csvs-to-sqlite: Convert CSV files into a SQLite database.
Development
To set up this tool locally, first checkout the code. Then create a new virtual environment:
cd db-to-sqlite
python3 -mvenv venv
source venv/bin/activate
Or if you are using pipenv:
pipenv shell
Now install the dependencies and test dependencies:
pip install -e '.[test]'
To run the tests:
pytest
This will skip tests against MySQL or PostgreSQL if you do not have their additional dependencies installed.
You can install those extra dependencies like so:
pip install -e '.[test_mysql,test_postgresql]'
You can alternative use pip install psycopg2-binary if you cannot install the psycopg2 dependency used by the test_postgresql extra.
See Running a MySQL server using Homebrew for tips on running the tests against MySQL on macOS, including how to install the mysqlclient dependency.
The PostgreSQL and MySQL tests default to expecting to run against servers on localhost. You can use environment variables to point them at different test database servers:
MYSQL_TEST_DB_CONNECTION - defaults to mysql://root@localhost/test_db_to_sqlite
POSTGRESQL_TEST_DB_CONNECTION - defaults to postgresql://localhost/test_db_to_sqlite
The database you indicate in the environment variable - test_db_to_sqlite by default - will be deleted and recreated on every test run.
",,,,,,
167730071,MDEwOlJlcG9zaXRvcnkxNjc3MzAwNzE=,datasette-pretty-json,simonw/datasette-pretty-json,0,9599,https://github.com/simonw/datasette-pretty-json,Datasette plugin that pretty-prints any column values that are valid JSON objects or arrays,0,2019-01-26T19:30:43Z,2022-09-24T06:13:11Z,2022-09-28T21:06:31Z,,14,8,8,Python,1,1,1,1,0,0,0,0,1,apache-2.0,"[""datasette"", ""datasette-io"", ""datasette-plugin"", ""json""]",0,1,8,master,"{""admin"": false, ""maintain"": false, ""push"": false, ""triage"": false, ""pull"": false}",,,0,2,"# datasette-pretty-json
[PyPI](https://pypi.org/project/datasette-pretty-json/)
[Changelog](https://github.com/simonw/datasette-pretty-json/releases)
[Tests](https://github.com/simonw/datasette-pretty-json/actions?query=workflow%3ATest)
[License](https://github.com/simonw/datasette-pretty-json/blob/main/LICENSE)
[Datasette](https://github.com/simonw/datasette) plugin that pretty-prints any column values that are valid JSON objects or arrays.
You may also be interested in [datasette-json-html](https://github.com/simonw/datasette-json-html).
","
datasette-pretty-json
Datasette plugin that pretty-prints any column values that are valid JSON objects or arrays.
",1,public,0,,0,
167759846,MDEwOlJlcG9zaXRvcnkxNjc3NTk4NDY=,markdown-to-sqlite,simonw/markdown-to-sqlite,0,9599,https://github.com/simonw/markdown-to-sqlite,CLI tool for loading markdown files into a SQLite database,0,2019-01-27T02:04:54Z,2022-05-13T18:09:26Z,2022-05-13T18:09:22Z,,13,49,49,Python,1,1,1,1,0,2,0,0,2,apache-2.0,"[""datasette-io"", ""datasette-tool"", ""markdown"", ""sqlite"", ""yaml""]",2,2,49,main,"{""admin"": false, ""maintain"": false, ""push"": false, ""triage"": false, ""pull"": false}",,,2,3,"# markdown-to-sqlite
[PyPI](https://pypi.python.org/pypi/markdown-to-sqlite)
[Changelog](https://github.com/simonw/markdown-to-sqlite/releases)
[Tests](https://github.com/simonw/markdown-to-sqlite/actions?query=workflow%3ATest)
[License](https://github.com/simonw/markdown-to-sqlite/blob/main/LICENSE)
CLI tool for loading markdown files into a SQLite database.
YAML embedded in the markdown files will be used to populate additional columns.
Usage: markdown-to-sqlite [OPTIONS] DBNAME TABLE PATHS...
For example:
$ markdown-to-sqlite docs.db documents file1.md file2.md
## Breaking change
Prior to version 1.0 this argument order was different - markdown files were listed before the database and table.
","
markdown-to-sqlite
CLI tool for loading markdown files into a SQLite database.
YAML embedded in the markdown files will be used to populate additional columns.
Prior to version 1.0 this argument order was different - markdown files were listed before the database and table.
",1,public,0,,,
168474970,MDEwOlJlcG9zaXRvcnkxNjg0NzQ5NzA=,dbf-to-sqlite,simonw/dbf-to-sqlite,0,9599,https://github.com/simonw/dbf-to-sqlite,"CLI tool for converting DBF files (dBase, FoxPro etc) to SQLite",0,2019-01-31T06:30:46Z,2021-03-23T01:29:41Z,2020-02-16T00:41:20Z,,8,25,25,Python,1,1,1,1,0,8,0,0,3,apache-2.0,"[""sqlite"", ""foxpro"", ""dbf"", ""dbase"", ""datasette-io"", ""datasette-tool""]",8,3,25,master,"{""admin"": false, ""push"": false, ""pull"": false}",,,8,2,"# dbf-to-sqlite
[PyPI](https://pypi.python.org/pypi/dbf-to-sqlite)
[Tests](https://travis-ci.com/simonw/dbf-to-sqlite)
[License](https://github.com/simonw/dbf-to-sqlite/blob/master/LICENSE)
CLI tool for converting DBF files (dBase, FoxPro etc) to SQLite.
## Installation
pip install dbf-to-sqlite
## Usage
$ dbf-to-sqlite --help
Usage: dbf-to-sqlite [OPTIONS] DBF_PATHS... SQLITE_DB
Convert DBF files (dBase, FoxPro etc) to SQLite
https://github.com/simonw/dbf-to-sqlite
Options:
--version Show the version and exit.
--table TEXT Table name to use (only valid for single files)
-v, --verbose Show what's going on
--help Show this message and exit.
Example usage:
$ dbf-to-sqlite *.DBF database.db
This will create a new SQLite database called `database.db` containing one table for each of the `DBF` files in the current directory.
Looking for DBF files to try this out on? Try downloading the [Himalayan Database](http://himalayandatabase.com/) of all expeditions that have climbed in the Nepal Himalaya.
","
dbf-to-sqlite
CLI tool for converting DBF files (dBase, FoxPro etc) to SQLite.
Installation
pip install dbf-to-sqlite
Usage
$ dbf-to-sqlite --help
Usage: dbf-to-sqlite [OPTIONS] DBF_PATHS... SQLITE_DB
Convert DBF files (dBase, FoxPro etc) to SQLite
https://github.com/simonw/dbf-to-sqlite
Options:
--version Show the version and exit.
--table TEXT Table name to use (only valid for single files)
-v, --verbose Show what's going on
--help Show this message and exit.
Example usage:
$ dbf-to-sqlite *.DBF database.db
This will create a new SQLite database called database.db containing one table for each of the DBF files in the current directory.
Looking for DBF files to try this out on? Try downloading the Himalayan Database of all expeditions that have climbed in the Nepal Himalaya.
",,,,,,
174715153,MDEwOlJlcG9zaXRvcnkxNzQ3MTUxNTM=,datasette-jellyfish,simonw/datasette-jellyfish,0,9599,https://github.com/simonw/datasette-jellyfish,Datasette plugin adding SQL functions for fuzzy text matching powered by Jellyfish,0,2019-03-09T16:02:01Z,2021-02-06T02:33:49Z,2021-02-06T02:34:18Z,https://datasette.io/plugins/datasette-jellyfish,15,9,9,Python,1,1,1,1,0,2,0,0,0,apache-2.0,"[""datasette"", ""datasette-plugin"", ""datasette-io""]",2,0,9,main,"{""admin"": false, ""push"": false, ""pull"": false}",,,2,1,"# datasette-jellyfish
[PyPI](https://pypi.org/project/datasette-jellyfish/)
[Changelog](https://github.com/simonw/datasette-jellyfish/releases)
[Tests](https://github.com/simonw/datasette-jellyfish/actions?query=workflow%3ATest)
[License](https://github.com/simonw/datasette-jellyfish/blob/main/LICENSE)
Datasette plugin that adds custom SQL functions for fuzzy string matching, built on top of the [Jellyfish](https://github.com/jamesturk/jellyfish) Python library by James Turk and Michael Stephens.
Interactive demos:
* [soundex, metaphone, nysiis, match_rating_codex comparison](https://latest-with-plugins.datasette.io/fixtures?sql=SELECT%0D%0A++++soundex%28%3As%29%2C+%0D%0A++++metaphone%28%3As%29%2C+%0D%0A++++nysiis%28%3As%29%2C+%0D%0A++++match_rating_codex%28%3As%29&s=demo).
* [distance functions comparison](https://latest-with-plugins.datasette.io/fixtures?sql=SELECT%0D%0A++++levenshtein_distance%28%3As1%2C+%3As2%29%2C%0D%0A++++damerau_levenshtein_distance%28%3As1%2C+%3As2%29%2C%0D%0A++++hamming_distance%28%3As1%2C+%3As2%29%2C%0D%0A++++jaro_similarity%28%3As1%2C+%3As2%29%2C%0D%0A++++jaro_winkler_similarity%28%3As1%2C+%3As2%29%2C%0D%0A++++match_rating_comparison%28%3As1%2C+%3As2%29%3B&s1=barrack+obama&s2=barrack+h+obama)
Examples:
SELECT soundex(""hello"");
-- Outputs H400
SELECT metaphone(""hello"");
-- Outputs HL
SELECT nysiis(""hello"");
-- Outputs HAL
SELECT match_rating_codex(""hello"");
-- Outputs HLL
SELECT porter_stem(""running"");
-- Outputs run
SELECT levenshtein_distance(""hello"", ""hello world"");
-- Outputs 6
SELECT damerau_levenshtein_distance(""hello"", ""hello world"");
-- Outputs 6
SELECT hamming_distance(""hello"", ""hello world"");
-- Outputs 6
SELECT jaro_similarity(""hello"", ""hello world"");
-- Outputs 0.8181818181818182
SELECT jaro_winkler_similarity(""hello"", ""hello world"");
-- Outputs 0.890909090909091
SELECT match_rating_comparison(""hello"", ""helloo"");
-- Outputs 1
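To try the same functions outside Datasette you can register them on a plain sqlite3 connection using the Jellyfish library directly - a minimal sketch of the technique, not the plugin's own code:
```python
import sqlite3
import jellyfish

conn = sqlite3.connect(':memory:')
# Expose two of the Jellyfish functions as custom SQL functions
conn.create_function('soundex', 1, jellyfish.soundex)
conn.create_function('levenshtein_distance', 2, jellyfish.levenshtein_distance)
sql = ""SELECT soundex('hello'), levenshtein_distance('hello', 'hello world')""
print(conn.execute(sql).fetchone())  # ('H400', 6)
```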
See [the Jellyfish documentation](https://jellyfish.readthedocs.io/en/latest/) for an explanation of each of these functions.",,,,,,,
175321497,MDEwOlJlcG9zaXRvcnkxNzUzMjE0OTc=,csv-diff,simonw/csv-diff,0,9599,https://github.com/simonw/csv-diff,Python CLI tool and library for diffing CSV and JSON files,0,2019-03-13T01:11:26Z,2022-07-29T20:01:02Z,2022-07-29T20:00:59Z,,34,198,198,Python,1,1,1,1,0,29,0,0,18,apache-2.0,"[""click"", ""csv"", ""datasette-io"", ""datasette-tool"", ""diff"", ""git-scraping""]",29,18,198,main,"{""admin"": false, ""maintain"": false, ""push"": false, ""triage"": false, ""pull"": false}",,,29,7,"# csv-diff
[](https://pypi.org/project/csv-diff/)
[](https://github.com/simonw/csv-diff/releases)
[](https://github.com/simonw/csv-diff/actions?query=workflow%3ATest)
[](https://github.com/simonw/csv-diff/blob/main/LICENSE)
Tool for viewing the difference between two CSV, TSV or JSON files. See [Generating a commit log for San Francisco’s official list of trees](https://simonwillison.net/2019/Mar/13/tree-history/) (and the [sf-tree-history repo commit log](https://github.com/simonw/sf-tree-history/commits)) for background information on this project.
## Installation
pip install csv-diff
## Usage
Consider two CSV files:
`one.csv`
id,name,age
1,Cleo,4
2,Pancakes,2
`two.csv`
id,name,age
1,Cleo,5
3,Bailey,1
`csv-diff` can show a human-readable summary of differences between the files:
$ csv-diff one.csv two.csv --key=id
1 row changed, 1 row added, 1 row removed
1 row changed
Row 1
age: ""4"" => ""5""
1 row added
id: 3
name: Bailey
age: 1
1 row removed
id: 2
name: Pancakes
age: 2
The `--key=id` option means that the `id` column should be treated as the unique key, to identify which records have changed.
The tool will automatically detect if your files are comma- or tab-separated. You can override this automatic detection and force the tool to use a specific format using `--format=tsv` or `--format=csv`.
You can also feed it JSON files, provided they are a JSON array of objects where each object has the same keys. Use `--format=json` if your input files are JSON.
Use `--show-unchanged` to include full details of the unchanged values for rows with at least one change in the diff output:
% csv-diff one.csv two.csv --key=id --show-unchanged
1 row changed
id: 1
age: ""4"" => ""5""
Unchanged:
name: ""Cleo""
You can use the `--json` option to get a machine-readable difference:
$ csv-diff one.csv two.csv --key=id --json
{
""added"": [
{
""id"": ""3"",
""name"": ""Bailey"",
""age"": ""1""
}
],
""removed"": [
{
""id"": ""2"",
""name"": ""Pancakes"",
""age"": ""2""
}
],
""changed"": [
{
""key"": ""1"",
""changes"": {
""age"": [
""4"",
""5""
]
}
}
],
""columns_added"": [],
""columns_removed"": []
}
## As a Python library
You can also import the Python library into your own code like so:
from csv_diff import load_csv, compare
diff = compare(
load_csv(open(""one.csv""), key=""id""),
load_csv(open(""two.csv""), key=""id"")
)
`diff` will now contain the same data structure as the output in the `--json` example above.
If the columns in the CSV have changed, those added or removed columns will be ignored when calculating changes made to specific rows.
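The dictionary uses the same keys as the `--json` output shown above (`added`, `removed`, `changed`, `columns_added` and `columns_removed`), so it is straightforward to post-process. Here is a rough sketch that prints a custom summary:

    from csv_diff import load_csv, compare

    diff = compare(
        load_csv(open('one.csv'), key='id'),
        load_csv(open('two.csv'), key='id'),
    )
    for row in diff['added']:
        print('added:', row)
    for row in diff['removed']:
        print('removed:', row)
    for change in diff['changed']:
        # 'changes' maps each column name to a [old_value, new_value] pair
        for column, (old, new) in change['changes'].items():
            print('row', change['key'], ':', column, 'went from', old, 'to', new)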
## As a Docker container
### Build the image
$ docker build -t csvdiff .
### Run the container
$ docker run --rm -v $(pwd):/files csvdiff
Suppose the current directory contains two CSV files: one.csv and two.csv
$ docker run --rm -v $(pwd):/files csvdiff one.csv two.csv
## Alternatives
- [csvdiff](https://github.com/aswinkarthik/csvdiff) is a ""fast diff tool for comparing CSV files"" - it may give better results than `csv-diff` on larger files.
","
The --key=id option means that the id column should be treated as the unique key, to identify which records have changed.
The tool will automatically detect if your files are comma- or tab-separated. You can override this automatic detection and force the tool to use a specific format using --format=tsv or --format=csv.
You can also feed it JSON files, provided they are a JSON array of objects where each object has the same keys. Use --format=json if your input files are JSON.
Use --show-unchanged to include full details of the unchanged values for rows with at least one change in the diff output:
diff will now contain the same data structure as the output in the --json example above.
If the columns in the CSV have changed, those added or removed columns will be ignored when calculating changes made to specific rows.
As a Docker container
Build the image
$ docker build -t csvdiff .
Run the container
$ docker run --rm -v $(pwd):/files csvdiff
Suppose the current directory contains two CSV files: one.csv and two.csv
$ docker run --rm -v $(pwd):/files csvdiff one.csv two.csv
Alternatives
csvdiff is a ""fast diff tool for comparing CSV files"" - it may give better results than csv-diff on larger files.
",1,public,0,,0,
175550127,MDEwOlJlcG9zaXRvcnkxNzU1NTAxMjc=,yaml-to-sqlite,simonw/yaml-to-sqlite,0,9599,https://github.com/simonw/yaml-to-sqlite,Utility for converting YAML files to SQLite,0,2019-03-14T04:49:08Z,2021-06-13T09:04:40Z,2021-06-13T04:45:52Z,,19,36,36,Python,1,1,1,1,0,2,0,0,0,apache-2.0,"[""yaml"", ""sqlite"", ""datasette-io"", ""datasette-tool""]",2,0,36,main,"{""admin"": false, ""push"": false, ""pull"": false}",,,2,1,"# yaml-to-sqlite
[](https://pypi.org/project/yaml-to-sqlite/)
[](https://github.com/simonw/yaml-to-sqlite/releases)
[](https://github.com/simonw/yaml-to-sqlite/actions?query=workflow%3ATest)
[](https://github.com/simonw/yaml-to-sqlite/blob/main/LICENSE)
Load the contents of a YAML file into a SQLite database table.
```
$ yaml-to-sqlite --help
Usage: yaml-to-sqlite [OPTIONS] DB_PATH TABLE YAML_FILE
Convert YAML files to SQLite
Options:
--version Show the version and exit.
--pk TEXT Column to use as a primary key
--single-column TEXT If YAML file is a list of values, populate this column
--help Show this message and exit.
```
## Usage
Given a `news.yml` file containing the following:
```yaml
- date: 2021-06-05
body: |-
[Datasette 0.57](https://docs.datasette.io/en/stable/changelog.html#v0-57) is out with an important security patch.
- date: 2021-05-10
body: |-
[Django SQL Dashboard](https://simonwillison.net/2021/May/10/django-sql-dashboard/) is a new tool that brings a useful authenticated subset of Datasette to Django projects that are built on top of PostgreSQL.
```
Running this command:
```bash
$ yaml-to-sqlite news.db stories news.yml
```
Will create a database file with this schema:
```bash
$ sqlite-utils schema news.db
CREATE TABLE [stories] (
[date] TEXT,
[body] TEXT
);
```
The `--pk` option can be used to set a column as the primary key for the table:
```bash
$ yaml-to-sqlite news.db stories news.yml --pk date
$ sqlite-utils schema news.db
CREATE TABLE [stories] (
[date] TEXT PRIMARY KEY,
[body] TEXT
);
```
## Single column YAML lists
The `--single-column` option can be used when the YAML file is a list of values, for example a file called `dogs.yml` containing the following:
```yaml
- Cleo
- Pancakes
- Nixie
```
Running this command:
```bash
$ yaml-to-sqlite dogs.db dogs.yml --single-column=name
```
Will create a single `dogs` table with a single `name` column that is the primary key:
```bash
$ sqlite-utils schema dogs.db
CREATE TABLE [dogs] (
[name] TEXT PRIMARY KEY
);
$ sqlite-utils dogs.db 'select * from dogs' -t
name
--------
Cleo
Pancakes
Nixie
```
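If you would rather inspect the result from Python than from the command line, the `sqlite-utils` package used above also has a Python API - a brief sketch, assuming `dogs.db` was created as shown:
```python
import sqlite_utils

db = sqlite_utils.Database('dogs.db')
# Iterate over the rows that yaml-to-sqlite inserted
for row in db['dogs'].rows:
    print(row['name'])
# Cleo
# Pancakes
# Nixie
```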
","
yaml-to-sqlite
Load the contents of a YAML file into a SQLite database table.
$ yaml-to-sqlite --help
Usage: yaml-to-sqlite [OPTIONS] DB_PATH TABLE YAML_FILE
Convert YAML files to SQLite
Options:
--version Show the version and exit.
--pk TEXT Column to use as a primary key
--single-column TEXT If YAML file is a list of values, populate this column
--help Show this message and exit.
Usage
Given a news.yml file containing the following:
- date: 2021-06-05
  body: |- [Datasette 0.57](https://docs.datasette.io/en/stable/changelog.html#v0-57) is out with an important security patch.
- date: 2021-05-10
  body: |- [Django SQL Dashboard](https://simonwillison.net/2021/May/10/django-sql-dashboard/) is a new tool that brings a useful authenticated subset of Datasette to Django projects that are built on top of PostgreSQL.
Will create a single dogs table with a single name column that is the primary key:
$ sqlite-utils schema dogs.db
CREATE TABLE [dogs] (
[name] TEXT PRIMARY KEY
);
$ sqlite-utils dogs.db 'select * from dogs' -t
name
--------
Cleo
Pancakes
Nixie
",,,,,,
184168864,MDEwOlJlcG9zaXRvcnkxODQxNjg4NjQ=,datasette-render-html,simonw/datasette-render-html,0,9599,https://github.com/simonw/datasette-render-html,Plugin for selectively rendering the HTML in specific columns,0,2019-04-30T01:21:25Z,2020-09-24T04:44:47Z,2021-03-17T03:57:13Z,,8,2,2,Python,1,1,1,1,0,2,0,0,1,,"[""datasette"", ""datasette-plugin"", ""datasette-io""]",2,1,2,master,"{""admin"": false, ""push"": false, ""pull"": false}",,,2,1,"# datasette-render-html
[](https://pypi.org/project/datasette-render-html/)
[](https://circleci.com/gh/simonw/datasette-render-html)
[](https://github.com/simonw/datasette-render-html/blob/master/LICENSE)
This Datasette plugin lets you configure Datasette to render specific columns as HTML in the table and row interfaces.
This means you can store HTML in those columns and have it rendered as such on those pages.
If you have a database called `docs.db` containing a `glossary` table and you want the `definition` column in that table to be rendered as HTML, you would use a `metadata.json` file that looks like this:
{
""databases"": {
""docs"": {
""tables"": {
""glossary"": {
""plugins"": {
""datasette-render-html"": {
""columns"": [""definition""]
}
}
}
}
}
}
}
## Security
This plugin allows HTML to be rendered exactly as it is stored in the database. As such, you should be sure only to use this against columns with content that you trust - otherwise you could open yourself up to an [XSS attack](https://owasp.org/www-community/attacks/xss/).
It's possible to configure this plugin to apply to columns with specific names across whole databases or the full Datasette instance, but doing so is not safe. It could open you up to XSS vulnerabilities where an attacker composes a SQL query that results in a column containing unsafe HTML.
As such, you should only use this plugin against specific columns in specific tables, as shown in the example above.
","
datasette-render-html
This Datasette plugin lets you configure Datasette to render specific columns as HTML in the table and row interfaces.
This means you can store HTML in those columns and have it rendered as such on those pages.
If you have a database called docs.db containing a glossary table and you want the definition column in that table to be rendered as HTML, you would use a metadata.json file that looks like this:
This plugin allows HTML to be rendered exactly as it is stored in the database. As such, you should be sure only to use this against columns with content that you trust - otherwise you could open yourself up to an XSS attack.
It's possible to configure this plugin to apply to columns with specific names across whole databases or the full Datasette instance, but doing so is not safe. It could open you up to XSS vulnerabilities where an attacker composes a SQL query that results in a column containing unsafe HTML.
As such, you should only use this plugin against specific columns in specific tables, as shown in the example above.
",,,,,,
189321671,MDEwOlJlcG9zaXRvcnkxODkzMjE2NzE=,datasette-jq,simonw/datasette-jq,0,9599,https://github.com/simonw/datasette-jq,Datasette plugin that adds a custom SQL function for executing jq expressions against JSON values,0,2019-05-30T01:06:31Z,2020-12-24T17:35:27Z,2020-04-09T05:43:43Z,,11,10,10,Python,1,1,1,1,0,0,0,0,0,apache-2.0,"[""jq"", ""datasette"", ""datasette-plugin"", ""datasette-io""]",0,0,10,master,"{""admin"": false, ""push"": false, ""pull"": false}",,,0,2,"# datasette-jq
[](https://pypi.org/project/datasette-jq/)
[](https://circleci.com/gh/simonw/datasette-jq)
[](https://github.com/simonw/datasette-jq/blob/master/LICENSE)
Datasette plugin that adds custom SQL functions for executing [jq](https://stedolan.github.io/jq/) expressions against JSON values.
Install this plugin in the same environment as Datasette to enable the `jq()` SQL function.
Usage:
select jq(
column_with_json,
""{top_3: .classifiers[:3], v: .version}""
)
See [the jq manual](https://stedolan.github.io/jq/manual/#Basicfilters) for full details of supported expression syntax.
## Interactive demo
You can try this plugin out at [datasette-jq-demo.datasette.io](https://datasette-jq-demo.datasette.io/)
Sample query:
select package, ""https://pypi.org/project/"" || package || ""/"" as url,
jq(info, ""{summary: .info.summary, author: .info.author, versions: .releases|keys|reverse}"")
from packages
[Try this query out](https://datasette-jq-demo.datasette.io/demo?sql=select+package%2C+%22https%3A%2F%2Fpypi.org%2Fproject%2F%22+%7C%7C+package+%7C%7C+%22%2F%22+as+url%2C%0D%0Ajq%28info%2C+%22%7Bsummary%3A+.info.summary%2C+author%3A+.info.author%2C+versions%3A+.releases%7Ckeys%7Creverse%7D%22%29%0D%0Afrom+packages) in the interactive demo.
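Because Datasette exposes query results as JSON, you can also run jq-powered queries against the demo from Python using only the standard library - a hedged sketch (the `_shape=array` parameter is a standard Datasette option, and the column alias `details` is just an example):

    import json
    import urllib.parse
    import urllib.request

    sql = '''
    select package,
           jq(info, '{summary: .info.summary, versions: .releases|keys|reverse}') as details
    from packages limit 3
    '''
    url = 'https://datasette-jq-demo.datasette.io/demo.json?' + urllib.parse.urlencode(
        {'sql': sql, '_shape': 'array'}
    )
    for row in json.load(urllib.request.urlopen(url)):
        print(row['package'], row['details'])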
","
datasette-jq
Datasette plugin that adds custom SQL functions for executing jq expressions against JSON values.
Install this plugin in the same environment as Datasette to enable the jq() SQL function.
",,,,,,
190950781,MDEwOlJlcG9zaXRvcnkxOTA5NTA3ODE=,datasette-bplist,simonw/datasette-bplist,0,9599,https://github.com/simonw/datasette-bplist,Datasette plugin for working with Apple's binary plist format,0,2019-06-09T01:15:01Z,2021-06-07T18:05:00Z,2019-06-09T01:17:19Z,,7,9,9,Python,1,1,1,1,0,0,0,0,1,apache-2.0,"[""bplist"", ""datasette"", ""datasette-plugin"", ""datasette-io""]",0,1,9,master,"{""admin"": false, ""push"": false, ""pull"": false}",,,0,0,"# datasette-bplist
[](https://pypi.org/project/datasette-bplist/)
[](https://circleci.com/gh/simonw/datasette-bplist)
[](https://github.com/simonw/datasette-bplist/blob/master/LICENSE)
Datasette plugin for working with Apple's [binary plist](https://en.wikipedia.org/wiki/Property_list) format.
This plugin adds two features: a display hook and a SQL function.
The display hook will detect any database values that are encoded using the binary plist format. It will decode them, convert them into JSON and display them pretty-printed in the Datasette UI.
The SQL function `bplist_to_json(value)` can be used inside a SQL query to convert a binary plist value into a JSON string. This can then be used with SQLite's `json_extract()` function or with the [datasette-jq](https://github.com/simonw/datasette-jq) plugin to further analyze that data as part of a SQL query.
Install this plugin in the same environment as Datasette to enable this new functionality:
pip install datasette-bplist
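If you are curious what this conversion involves, Python's standard-library `plistlib` module can decode the same binary plist data - a rough illustration of the idea behind `bplist_to_json()`, not the plugin's actual implementation:

    import json
    import plistlib

    def bplist_to_json(value):
        # Decode the binary plist bytes, then re-encode as a JSON string.
        # default=str handles values (dates, raw bytes) that JSON cannot represent.
        return json.dumps(plistlib.loads(value), default=str)

    # example.plist here is any binary plist file you happen to have on disk
    with open('example.plist', 'rb') as fp:
        print(bplist_to_json(fp.read()))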
## Trying it out
If you use a Mac you already have plenty of SQLite databases that contain binary plist data.
One example is the database that powers the Apple Photos app.
This database tends to be locked, so you will need to create a copy of the database in order to run queries against it:
cp ~/Pictures/Photos\ Library.photoslibrary/database/photos.db /tmp/photos.db
The database also makes use of custom SQLite extensions which prevent it from opening in Datasette.
You can work around this by exporting the data that you want to experiment with into a new SQLite file.
I recommend trying this plugin against the `RKMaster_dataNote` table, which contains plist-encoded EXIF metadata about the photos you have taken.
You can export that table into a fresh database like so:
sqlite3 /tmp/photos.db "".dump RKMaster_dataNote"" | sqlite3 /tmp/exif.db
Now run `datasette /tmp/exif.db` and you can start trying out the plugin.
## Using the bplist_to_json() SQL function
Once you have the `exif.db` demo working, you can try the `bplist_to_json()` SQL function.
Here's a query that shows the camera lenses you have used the most often to take photos:
select
json_extract(
bplist_to_json(value),
""$.{Exif}.LensModel""
) as lens,
count(*) as n
from RKMaster_dataNote
group by lens
order by n desc;
If you have a large number of photos this query can take a long time to execute, so you may need to increase the SQL time limit enforced by Datasette like so:
$ datasette /tmp/exif.db \
--config sql_time_limit_ms:10000
Here's another query, showing the time at which you took every photo in your library which is classified as a screenshot:
select
attachedToId,
json_extract(
bplist_to_json(value),
""$.{Exif}.DateTimeOriginal""
)
from RKMaster_dataNote
where
json_extract(
bplist_to_json(value),
""$.{Exif}.UserComment""
) = ""Screenshot""
And if you install the [datasette-cluster-map](https://github.com/simonw/datasette-cluster-map) plugin, this query will show you a map of your most recent 1000 photos:
select
*,
json_extract(
bplist_to_json(value),
""$.{GPS}.Latitude""
) as latitude,
-json_extract(
bplist_to_json(value),
""$.{GPS}.Longitude""
) as longitude,
json_extract(
bplist_to_json(value),
""$.{Exif}.DateTimeOriginal""
) as datetime
from
RKMaster_dataNote
where
latitude is not null
order by
attachedToId desc
","
datasette-bplist
Datasette plugin for working with Apple's binary plist format.
This plugin adds two features: a display hook and a SQL function.
The display hook will detect any database values that are encoded using the binary plist format. It will decode them, convert them into JSON and display them pretty-printed in the Datasette UI.
The SQL function bplist_to_json(value) can be used inside a SQL query to convert a binary plist value into a JSON string. This can then be used with SQLite's json_extract() function or with the datasette-jq plugin to further analyze that data as part of a SQL query.
Install this plugin in the same environment as Datasette to enable this new functionality:
pip install datasette-bplist
Trying it out
If you use a Mac you already have plenty of SQLite databases that contain binary plist data.
One example is the database that powers the Apple Photos app.
This database tends to be locked, so you will need to create a copy of the database in order to run queries against it:
Now run datasette /tmp/exif.db and you can start trying out the plugin.
Using the bplist_to_json() SQL function
Once you have the exif.db demo working, you can try the bplist_to_json() SQL function.
Here's a query that shows the camera lenses you have used the most often to take photos:
select
json_extract(
bplist_to_json(value),
""$.{Exif}.LensModel""
) as lens,
count(*) as n
from RKMaster_dataNote
group by lens
order by n desc;
If you have a large number of photos this query can take a long time to execute, so you may need to increase the SQL time limit enforced by Datasette like so:
Here's another query, showing the time at which you took every photo in your library which is classified as a screenshot:
select
attachedToId,
json_extract(
bplist_to_json(value),
""$.{Exif}.DateTimeOriginal""
)
from RKMaster_dataNote
where
json_extract(
bplist_to_json(value),
""$.{Exif}.UserComment""
) = ""Screenshot""
And if you install the datasette-cluster-map plugin, this query will show you a map of your most recent 1000 photos:
select
*,
json_extract(
bplist_to_json(value),
""$.{GPS}.Latitude""
) as latitude,
-json_extract(
bplist_to_json(value),
""$.{GPS}.Longitude""
) as longitude,
json_extract(
bplist_to_json(value),
""$.{Exif}.DateTimeOriginal""
) as datetime
from
RKMaster_dataNote
where
latitude is not null
order by
attachedToId desc
",,,,,,
191022928,MDEwOlJlcG9zaXRvcnkxOTEwMjI5Mjg=,datasette-render-binary,simonw/datasette-render-binary,0,9599,https://github.com/simonw/datasette-render-binary,Datasette plugin for rendering binary data,0,2019-06-09T15:25:52Z,2021-06-02T09:29:20Z,2019-06-13T16:14:31Z,,62,7,7,Python,1,1,1,1,0,0,0,0,1,apache-2.0,"[""datasette"", ""datasette-plugin"", ""datasette-io""]",0,1,7,master,"{""admin"": false, ""push"": false, ""pull"": false}",,,0,1,"# datasette-render-binary
[](https://pypi.org/project/datasette-render-binary/)
[](https://circleci.com/gh/simonw/datasette-render-binary)
[](https://github.com/simonw/datasette-render-binary/blob/master/LICENSE)
Datasette plugin for rendering binary data.
Install this plugin in the same environment as Datasette to enable this new functionality:
pip install datasette-render-binary
Binary data in cells will now be rendered as a mixture of characters and octets.

","
datasette-render-binary
Datasette plugin for rendering binary data.
Install this plugin in the same environment as Datasette to enable this new functionality:
pip install datasette-render-binary
Binary data in cells will now be rendered as a mixture of characters and octets.
",,,,,,
195087137,MDEwOlJlcG9zaXRvcnkxOTUwODcxMzc=,datasette-auth-github,simonw/datasette-auth-github,0,9599,https://github.com/simonw/datasette-auth-github,Datasette plugin that authenticates users against GitHub,0,2019-07-03T16:02:53Z,2021-06-03T11:42:54Z,2021-02-25T06:40:17Z,https://datasette-auth-github-demo.datasette.io/,119,34,34,Python,1,1,1,1,0,4,0,0,3,apache-2.0,"[""asgi"", ""datasette"", ""datasette-plugin"", ""datasette-io""]",4,3,34,main,"{""admin"": false, ""push"": false, ""pull"": false}",,,4,1,"# datasette-auth-github
[](https://pypi.org/project/datasette-auth-github/)
[](https://github.com/simonw/datasette-auth-github/releases)
[](https://github.com/simonw/datasette-auth-github/actions?query=workflow%3ATest)
[](https://github.com/simonw/datasette-auth-github/blob/main/LICENSE)
Datasette plugin that authenticates users against GitHub.
- [Setup instructions](#setup-instructions)
- [The authenticated actor](#the-authenticated-actor)
- [Restricting access to specific users](#restricting-access-to-specific-users)
- [Restricting access to specific GitHub organizations or teams](#restricting-access-to-specific-github-organizations-or-teams)
- [What to do if a user is removed from an organization or team](#what-to-do-if-a-user-is-removed-from-an-organization-or-team)
## Setup instructions
* Install the plugin: `datasette install datasette-auth-github`
* Create a GitHub OAuth app: https://github.com/settings/applications/new
* Set the Authorization callback URL to `http://127.0.0.1:8001/-/github-auth-callback`
* Create a `metadata.json` file with the following structure:
```json
{
""title"": ""datasette-auth-github demo"",
""plugins"": {
""datasette-auth-github"": {
""client_id"": {""$env"": ""GITHUB_CLIENT_ID""},
""client_secret"": {""$env"": ""GITHUB_CLIENT_SECRET""}
}
}
}
```
Now you can start Datasette like this, passing in the secrets as environment variables:
$ GITHUB_CLIENT_ID=XXX GITHUB_CLIENT_SECRET=YYY datasette \
fixtures.db -m metadata.json
Note that hard-coding secrets in `metadata.json` is a bad idea as they will be visible to anyone who can navigate to `/-/metadata`. Instead, we use Datasette's mechanism for [adding secret plugin configuration options](https://docs.datasette.io/en/stable/plugins.html#secret-configuration-values).
By default anonymous users will still be able to interact with Datasette. If you wish all users to have to sign in with a GitHub account first, add this to your ``metadata.json``:
```json
{
""allow"": {
""id"": ""*""
},
""plugins"": {
""datasette-auth-github"": {
""..."": ""...""
}
}
}
```
## The authenticated actor
Visit `/-/actor` when signed in to see the shape of the authenticated actor. It should look something like this:
```json
{
""actor"": {
""display"": ""simonw"",
""gh_id"": ""9599"",
""gh_name"": ""Simon Willison"",
""gh_login"": ""simonw"",
""gh_email"": ""..."",
""gh_orgs"": [
""dogsheep"",
""datasette-project""
],
""gh_teams"": [
""dogsheep/test""
]
}
}
```
The `gh_orgs` and `gh_teams` properties will only be present if you used `load_teams` or `load_orgs`, documented below.
## Restricting access to specific users
You can use Datasette's [permissions mechanism](https://docs.datasette.io/en/stable/authentication.html) to specify which user or users are allowed to access your instance. Here's how to restrict access to just GitHub user `simonw`:
```json
{
""allow"": {
""gh_login"": ""simonw""
},
""plugins"": {
""datasette-auth-github"": {
""..."": ""...""
}
}
}
```
This `""allow""` block can be positioned at the database, table or query level instead: see [Configuring permissions in metadata.json](https://docs.datasette.io/en/stable/authentication.html#configuring-permissions-in-metadata-json) for details.
Note that GitHub allows users to change their username, and it is possible for other people to claim old usernames. If you are concerned that your users may change their usernames you can key the allow blocks against GitHub user IDs instead, which do not change:
```json
{
""allow"": {
""gh_id"": ""9599""
}
}
```
## Restricting access to specific GitHub organizations or teams
You can also restrict access to users who are members of a specific GitHub organization.
You'll need to configure the plugin to check if the user is a member of that organization when they first sign in. You can do that using the `""load_orgs""` plugin configuration option.
Then you can use `""allow"": {""gh_orgs"": [...]}` to specify which organizations are allowed access.
```json
{
""plugins"": {
""datasette-auth-github"": {
""..."": ""..."",
""load_orgs"": [""your-organization""]
}
},
""allow"": {
""gh_orgs"": ""your-organization""
}
}
```
If your organization is [arranged into teams](https://help.github.com/en/articles/organizing-members-into-teams) you can restrict access to a specific team like this:
```json
{
""plugins"": {
""datasette-auth-github"": {
""..."": ""..."",
""load_teams"": [
""your-organization/staff"",
""your-organization/engineering""
]
}
},
""allow"": {
""gh_teams"": ""your-organization/engineering""
}
}
```
## What to do if a user is removed from an organization or team
A user's organization and team memberships are checked once, when they first sign in. Those teams and organizations are then persisted in the user's signed `ds_actor` cookie.
This means that if a user is removed from an organization or team but still has a Datasette cookie, they will still be able to access that Datasette instance.
You can remedy this by rotating the `DATASETTE_SECRET` environment variable any time you make changes to your GitHub organization members.
Changing this value will cause all of your existing users to be signed out, by invalidating their cookies. When they sign back in again their new memberships will be recorded in a new cookie.
See [Configuring the secret](https://docs.datasette.io/en/stable/settings.html?highlight=secret#configuring-the-secret) in the Datasette documentation for more details.
","
datasette-auth-github
Datasette plugin that authenticates users against GitHub.
Note that hard-coding secrets in metadata.json is a bad idea as they will be visible to anyone who can navigate to /-/metadata. Instead, we use Datasette's mechanism for adding secret plugin configuration options.
By default anonymous users will still be able to interact with Datasette. If you wish all users to have to sign in with a GitHub account first, add this to your metadata.json:
The gh_orgs and gh_teams properties will only be present if you used load_teams or load_orgs, documented below.
Restricting access to specific users
You can use Datasette's permissions mechanism to specify which user or users are allowed to access your instance. Here's how to restrict access to just GitHub user simonw:
Note that GitHub allows users to change their username, and it is possible for other people to claim old usernames. If you are concerned that your users may change their usernames you can key the allow blocks against GitHub user IDs instead, which do not change:
{
""allow"": {
""gh_id"": ""9599""
}
}
Restricting access to specific GitHub organizations or teams
You can also restrict access to users who are members of a specific GitHub organization.
You'll need to configure the plugin to check if the user is a member of that organization when they first sign in. You can do that using the ""load_orgs"" plugin configuration option.
Then you can use ""allow"": {""gh_orgs"": [...]} to specify which organizations are allowed access.
What to do if a user is removed from an organization or team
A user's organization and team memberships are checked once, when they first sign in. Those teams and organizations are then persisted in the user's signed ds_actor cookie.
This means that if a user is removed from an organization or team but still has a Datasette cookie, they will still be able to access that Datasette instance.
You can remedy this by rotating the DATASETTE_SECRET environment variable any time you make changes to your GitHub organization members.
Changing this value will cause all of your existing users to be signed out, by invalidating their cookies. When they sign back in again their new memberships will be recorded in a new cookie.
",,,,,,
195145678,MDEwOlJlcG9zaXRvcnkxOTUxNDU2Nzg=,sqlite-diffable,simonw/sqlite-diffable,0,9599,https://github.com/simonw/sqlite-diffable,Tools for dumping/loading a SQLite database to diffable directory structure,0,2019-07-04T00:58:46Z,2022-07-12T17:00:19Z,2022-08-18T22:49:29Z,,30,42,42,Python,1,1,1,1,0,3,0,0,3,apache-2.0,"[""datasette-io"", ""datasette-tool"", ""sqlite""]",3,3,42,main,"{""admin"": false, ""maintain"": false, ""push"": false, ""triage"": false, ""pull"": false}",,,3,1,"# sqlite-diffable
[](https://pypi.org/project/sqlite-diffable/)
[](https://github.com/simonw/sqlite-diffable/releases)
[](https://github.com/simonw/sqlite-diffable/blob/main/LICENSE)
Tools for dumping/loading a SQLite database to diffable directory structure
## Installation
pip install sqlite-diffable
## Demo
The repository at [simonw/simonwillisonblog-backup](https://github.com/simonw/simonwillisonblog-backup) contains a backup of the database on my blog, https://simonwillison.net/ - created using this tool.
## Dumping a database
Given a SQLite database called `fixtures.db` containing a table `facetable`, the following will dump out that table to the `dump/` directory:
sqlite-diffable dump fixtures.db dump/ facetable
To dump out every table in that database, use `--all`:
sqlite-diffable dump fixtures.db dump/ --all
## Loading a database
To load a previously dumped database, run the following:
sqlite-diffable load restored.db dump/
This will show an error if any of the tables that are being restored already exist in the database file.
You can replace those tables (dropping them before restoring them) using the `--replace` option:
sqlite-diffable load restored.db dump/ --replace
## Converting to JSON objects
Table rows are stored in the `.ndjson` files as newline-delimited JSON arrays, like this:
```
[""a"", ""a"", ""a-a"", 63, null, 0.7364712141640124, ""$null""]
[""a"", ""b"", ""a-b"", 51, null, 0.6020187290499803, ""$null""]
```
Sometimes it can be more convenient to work with a list of JSON objects.
The `sqlite-diffable objects` command can read a `.ndjson` file and its accompanying `.metadata.json` file and output JSON objects to standard output:
sqlite-diffable objects fixtures.db dump/sortable.ndjson
The output of that command looks something like this:
```
{""pk1"": ""a"", ""pk2"": ""a"", ""content"": ""a-a"", ""sortable"": 63, ""sortable_with_nulls"": null, ""sortable_with_nulls_2"": 0.7364712141640124, ""text"": ""$null""}
{""pk1"": ""a"", ""pk2"": ""b"", ""content"": ""a-b"", ""sortable"": 51, ""sortable_with_nulls"": null, ""sortable_with_nulls_2"": 0.6020187290499803, ""text"": ""$null""}
```
Add `-o` to write that output to a file:
sqlite-diffable objects fixtures.db dump/sortable.ndjson -o output.txt
Add `--array` to output a JSON array of objects, as opposed to a newline-delimited file:
sqlite-diffable objects fixtures.db dump/sortable.ndjson --array
Output:
```
[
{""pk1"": ""a"", ""pk2"": ""a"", ""content"": ""a-a"", ""sortable"": 63, ""sortable_with_nulls"": null, ""sortable_with_nulls_2"": 0.7364712141640124, ""text"": ""$null""},
{""pk1"": ""a"", ""pk2"": ""b"", ""content"": ""a-b"", ""sortable"": 51, ""sortable_with_nulls"": null, ""sortable_with_nulls_2"": 0.6020187290499803, ""text"": ""$null""}
]
```
## Storage format
Each table is represented as two files. The first, `table_name.metadata.json`, contains metadata describing the structure of the table. For a table called `redirects_redirect` that file might look like this:
```json
{
""name"": ""redirects_redirect"",
""columns"": [
""id"",
""domain"",
""path"",
""target"",
""created""
],
""schema"": ""CREATE TABLE [redirects_redirect] (\n [id] INTEGER PRIMARY KEY,\n [domain] TEXT,\n [path] TEXT,\n [target] TEXT,\n [created] TEXT\n)""
}
```
It is an object with three keys: `name` is the name of the table, `columns` is an array of column strings and `schema` is the SQL schema text used for that table.
The second file, `table_name.ndjson`, contains [newline-delimited JSON](http://ndjson.org/) for every row in the table. Each row is represented as a JSON array with items corresponding to each of the columns defined in the metadata.
That file for the `redirects_redirect.ndjson` table might look like this:
```
[1, ""feeds.simonwillison.net"", ""swn-everything"", ""https://simonwillison.net/atom/everything/"", ""2017-10-01T21:11:36.440537+00:00""]
[2, ""feeds.simonwillison.net"", ""swn-entries"", ""https://simonwillison.net/atom/entries/"", ""2017-10-01T21:12:32.478849+00:00""]
[3, ""feeds.simonwillison.net"", ""swn-links"", ""https://simonwillison.net/atom/links/"", ""2017-10-01T21:12:54.820729+00:00""]
```
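Because the storage format is plain JSON, you can read a dumped table back into Python dictionaries without any special tooling. Here is a short sketch of the same idea the `objects` command implements (the directory and table names are just examples):
```python
import json
from pathlib import Path

def read_dump(directory, table):
    # Column names come from the .metadata.json file...
    metadata = json.loads((Path(directory) / (table + '.metadata.json')).read_text())
    columns = metadata['columns']
    # ...and each line of the .ndjson file is one row, stored as a JSON array
    for line in (Path(directory) / (table + '.ndjson')).read_text().splitlines():
        yield dict(zip(columns, json.loads(line)))

for row in read_dump('dump/', 'redirects_redirect'):
    print(row)
```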
","
sqlite-diffable
Tools for dumping/loading a SQLite database to diffable directory structure
Each table is represented as two files. The first, table_name.metadata.json, contains metadata describing the structure of the table. For a table called redirects_redirect that file might look like this:
It is an object with three keys: name is the name of the table, columns is an array of column strings and schema is the SQL schema text used for that table.
The second file, table_name.ndjson, contains newline-delimited JSON for every row in the table. Each row is represented as a JSON array with items corresponding to each of the columns defined in the metadata.
That file for the redirects_redirect.ndjson table might look like this:
",1,public,0,,0,
195696804,MDEwOlJlcG9zaXRvcnkxOTU2OTY4MDQ=,datasette-cors,simonw/datasette-cors,0,9599,https://github.com/simonw/datasette-cors,Datasette plugin for configuring CORS headers,0,2019-07-07T21:03:11Z,2021-02-27T00:31:13Z,2019-07-11T04:40:57Z,,11,9,9,Python,1,1,1,1,0,0,0,0,1,apache-2.0,"[""datasette"", ""datasette-plugin"", ""datasette-io""]",0,1,9,master,"{""admin"": false, ""push"": false, ""pull"": false}",,,0,3,"# datasette-cors
[](https://pypi.org/project/datasette-cors/)
[](https://circleci.com/gh/simonw/datasette-cors)
[](https://github.com/simonw/datasette-cors/blob/master/LICENSE)
Datasette plugin for configuring CORS headers, based on https://github.com/simonw/asgi-cors
You can use this plugin to allow JavaScript running on a whitelisted set of domains to make `fetch()` calls to the JSON API provided by your Datasette instance.
## Installation
pip install datasette-cors
## Configuration
You need to add some configuration to your Datasette `metadata.json` file for this plugin to take effect.
To whitelist specific domains, use this:
```json
{
""plugins"": {
""datasette-cors"": {
""hosts"": [""https://www.example.com""]
}
}
}
```
You can also whitelist patterns like this:
```json
{
""plugins"": {
""datasette-cors"": {
""host_wildcards"": [""https://*.example.com""]
}
}
}
```
## Testing it
To test this plugin out, run it locally by saving one of the above examples as `metadata.json` and running this:
$ datasette --memory -m metadata.json
Now visit https://www.example.com/ in your browser, open the browser developer console and paste in the following:
```javascript
fetch(""http://127.0.0.1:8001/:memory:.json?sql=select+sqlite_version%28%29"").then(r => r.json()).then(console.log)
```
If the plugin is running correctly, you will see the JSON response output to the console.
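You can run the same check from Python instead of the browser console - a small sketch using the standard library, assuming Datasette is still running locally as above:
```python
import urllib.request

req = urllib.request.Request(
    'http://127.0.0.1:8001/:memory:.json?sql=select+sqlite_version()',
    # The CORS headers are only returned for whitelisted Origin values
    headers={'Origin': 'https://www.example.com'},
)
response = urllib.request.urlopen(req)
# Should print https://www.example.com if that origin is whitelisted
print(response.headers.get('Access-Control-Allow-Origin'))
```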
","
You can use this plugin to allow JavaScript running on a whitelisted set of domains to make fetch() calls to the JSON API provided by your Datasette instance.
Installation
pip install datasette-cors
Configuration
You need to add some configuration to your Datasette metadata.json file for this plugin to take effect.
If the plugin is running correctly, you will see the JSON response output to the console.
",,,,,,
207630174,MDEwOlJlcG9zaXRvcnkyMDc2MzAxNzQ=,datasette-rure,simonw/datasette-rure,0,9599,https://github.com/simonw/datasette-rure,Datasette plugin that adds a custom SQL function for executing matches using the Rust regular expression engine,0,2019-09-10T18:09:33Z,2020-12-04T04:26:53Z,2019-09-11T22:59:38Z,,19,4,4,Python,1,1,1,1,0,0,0,0,0,apache-2.0,"[""sqlite"", ""regular-expressions"", ""datasette"", ""datasette-plugin"", ""datasette-io""]",0,0,4,master,"{""admin"": false, ""push"": false, ""pull"": false}",,,0,1,"# datasette-rure
[](https://pypi.org/project/datasette-rure/)
[](https://circleci.com/gh/simonw/datasette-rure)
[](https://github.com/simonw/datasette-rure/blob/master/LICENSE)
Datasette plugin that adds a custom SQL function for executing matches using the Rust regular expression engine
Install this plugin in the same environment as Datasette to enable the `regexp()` SQL function.
$ pip install datasette-rure
The plugin is built on top of the [rure-python](https://github.com/davidblewett/rure-python) library by David Blewett.
## regexp() to test regular expressions
You can test if a value matches a regular expression like this:
select regexp('hi.*there', 'hi there')
-- returns 1
select regexp('not.*there', 'hi there')
-- returns 0
You can also use SQLite's custom syntax to run matches:
select 'hi there' REGEXP 'hi.*there'
-- returns 1
This means you can select rows based on regular expression matches - for example, to select every article where the title begins with an E or an F:
select * from articles where title REGEXP '^[EF]'
Try this out: [REGEXP interactive demo](https://datasette-rure-demo.datasette.io/24ways?sql=select+*+from+articles+where+title+REGEXP+%27%5E%5BEF%5D%27)
## regexp_match() to extract groups
You can extract captured subsets of a pattern using `regexp_match()`.
select regexp_match('.*( and .*)', title) as n from articles where n is not null
-- Returns the ' and X' component of any matching titles, e.g.
-- and Recognition
-- and Transitions Their Place
-- etc
This will return the first parenthesis match when called with two arguments. You can call it with three arguments to indicate which match you would like to extract:
select regexp_match('.*(and)(.*)', title, 2) as n from articles where n is not null
The function will return `null` for invalid inputs e.g. a pattern without capture groups.
Try this out: [regexp_match() interactive demo](https://datasette-rure-demo.datasette.io/24ways?sql=select+%27WHY+%27+%7C%7C+regexp_match%28%27Why+%28.*%29%27%2C+title%29+as+t+from+articles+where+t+is+not+null)
## regexp_matches() to extract multiple matches at once
The `regexp_matches()` function can be used to extract multiple patterns from a single string. The result is returned as a JSON array, which can then be further processed using SQLite's [JSON functions](https://www.sqlite.org/json1.html).
The first argument is a regular expression with named capture groups. The second argument is the string to be matched.
select regexp_matches(
'hello (?P<name>\w+) the (?P<species>\w+)',
'hello bob the dog, hello maggie the cat, hello tarquin the otter'
)
This will return a list of JSON objects, each one representing the named captures from the original regular expression:
[
{""name"": ""bob"", ""species"": ""dog""},
{""name"": ""maggie"", ""species"": ""cat""},
{""name"": ""tarquin"", ""species"": ""otter""}
]
Try this out: [regexp_matches() interactive demo](https://datasette-rure-demo.datasette.io/24ways?sql=select+regexp_matches%28%0D%0A++++%27hello+%28%3FP%3Cname%3E%5Cw%2B%29+the+%28%3FP%3Cspecies%3E%5Cw%2B%29%27%2C%0D%0A++++%27hello+bob+the+dog%2C+hello+maggie+the+cat%2C+hello+tarquin+the+otter%27%0D%0A%29)
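The named capture group syntax is the same one used by Python's built-in `re` module, so you can prototype a pattern locally before running it through `regexp_matches()` - an illustration only, since the plugin itself uses the Rust engine rather than `re`:

    import re

    pattern = r'hello (?P<name>\w+) the (?P<species>\w+)'
    text = 'hello bob the dog, hello maggie the cat, hello tarquin the otter'
    # finditer() yields one match per occurrence; groupdict() returns the named captures
    print([m.groupdict() for m in re.finditer(pattern, text)])
    # [{'name': 'bob', 'species': 'dog'}, {'name': 'maggie', 'species': 'cat'},
    #  {'name': 'tarquin', 'species': 'otter'}]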
","
datasette-rure
Datasette plugin that adds a custom SQL function for executing matches using the Rust regular expression engine
Install this plugin in the same environment as Datasette to enable the regexp() SQL function.
$ pip install datasette-rure
The plugin is built on top of the rure-python library by David Blewett.
regexp() to test regular expressions
You can test if a value matches a regular expression like this:
You can extract captured subsets of a pattern using regexp_match().
select regexp_match('.*( and .*)', title) as n from articles where n is not null
-- Returns the ' and X' component of any matching titles, e.g.
-- and Recognition
-- and Transitions Their Place
-- etc
This will return the first parenthesis match when called with two arguments. You can call it with three arguments to indicate which match you would like to extract:
select regexp_match('.*(and)(.*)', title, 2) as n from articles where n is not null
The function will return null for invalid inputs e.g. a pattern without capture groups.
regexp_matches() to extract multiple matches at once
The regexp_matches() function can be used to extract multiple patterns from a single string. The result is returned as a JSON array, which can then be further processed using SQLite's JSON functions.
The first argument is a regular expression with named capture groups. The second argument is the string to be matched.
select regexp_matches(
'hello (?P<name>\w+) the (?P<species>\w+)',
'hello bob the dog, hello maggie the cat, hello tarquin the otter'
)
This will return a list of JSON objects, each one representing the named captures from the original regular expression:
",,,,,,
209091256,MDEwOlJlcG9zaXRvcnkyMDkwOTEyNTY=,datasette-atom,simonw/datasette-atom,0,9599,https://github.com/simonw/datasette-atom,Datasette plugin that adds a .atom output format,0,2019-09-17T15:31:01Z,2021-03-26T02:06:51Z,2021-01-24T23:59:36Z,,47,10,10,Python,1,1,1,1,0,0,0,0,0,apache-2.0,"[""datasette"", ""datasette-plugin"", ""datasette-io""]",0,0,10,main,"{""admin"": false, ""push"": false, ""pull"": false}",,,0,2,"# datasette-atom
[](https://pypi.org/project/datasette-atom/)
[](https://github.com/simonw/datasette-atom/releases)
[](https://github.com/simonw/datasette-atom/actions?query=workflow%3ATest)
[](https://github.com/simonw/datasette-atom/blob/main/LICENSE)
Datasette plugin that adds support for generating [Atom feeds](https://validator.w3.org/feed/docs/atom.html) with the results of a SQL query.
## Installation
Install this plugin in the same environment as Datasette to enable the `.atom` output extension.
$ pip install datasette-atom
## Usage
To create an Atom feed you need to define a custom SQL query that returns a required set of columns:
* `atom_id` - a unique ID for each row. [This article](https://web.archive.org/web/20080211143232/http://diveintomark.org/archives/2004/05/28/howto-atom-id) has suggestions about ways to create these IDs.
* `atom_title` - a title for that row.
* `atom_updated` - an [RFC 3339](http://www.faqs.org/rfcs/rfc3339.html) timestamp representing the last time the entry was modified in a significant way. This can usually be the time that the row was created.
The following columns are optional:
* `atom_content` - content that should be shown in the feed. This will be treated as a regular string, so any embedded HTML tags will be escaped when they are displayed.
* `atom_content_html` - content that should be shown in the feed. This will be treated as an HTML string, and will be sanitized using [Bleach](https://github.com/mozilla/bleach) to ensure it does not have any malicious code in it before being returned as part of a `<content type=""html"">` Atom element. If both are provided, this will be used in place of `atom_content`.
* `atom_link` - a URL that should be used as the link that the feed entry points to.
* `atom_author_name` - the name of the author of the entry. If you provide this you can also provide `atom_author_uri` and `atom_author_email` with a URL and e-mail address for that author.
A query that returns these columns can then be returned as an Atom feed by adding the `.atom` extension.
## Example
Here is an example SQL query which generates an Atom feed for new entries on [www.niche-museums.com](https://www.niche-museums.com/):
```sql
select
'tag:niche-museums.com,' || substr(created, 0, 11) || ':' || id as atom_id,
name as atom_title,
created as atom_updated,
'https://www.niche-museums.com/browse/museums/' || id as atom_link,
coalesce(
'<img src=""' || photo_url || '?w=800&h=400&fit=crop&auto=compress"">',
''
) || '<p>' || description || '</p>' as atom_content_html
from
museums
order by
created desc
limit
15
```
You can try this query by [pasting it in here](https://www.niche-museums.com/browse) - then click the `.atom` link to see it as an Atom feed.
## Using a canned query
Datasette's [canned query mechanism](https://docs.datasette.io/en/stable/sql_queries.html#canned-queries) is a useful way to configure feeds. If a canned query definition has a `title` that will be used as the title of the Atom feed.
Here's an example, defined using a `metadata.yaml` file:
```yaml
databases:
browse:
queries:
feed:
title: Niche Museums
sql: |-
select
'tag:niche-museums.com,' || substr(created, 0, 11) || ':' || id as atom_id,
name as atom_title,
created as atom_updated,
'https://www.niche-museums.com/browse/museums/' || id as atom_link,
coalesce(
'<img src=""' || photo_url || '?w=800&h=400&fit=crop&auto=compress"">',
''
) || '<p>' || description || '</p>' as atom_content_html
from
museums
order by
created desc
limit
15
```
## Disabling HTML filtering
The HTML allow-list used by Bleach for the `atom_content_html` column can be found in the `clean(html)` function at the bottom of [datasette_atom/__init__.py](https://github.com/simonw/datasette-atom/blob/main/datasette_atom/__init__.py).
You can disable Bleach entirely for Atom feeds generated using a canned query. You should only do this if you are certain that no user-provided HTML could be included in that value.
Here's how to do that in `metadata.json`:
```json
{
""plugins"": {
""datasette-atom"": {
""allow_unsafe_html_in_canned_queries"": true
}
}
}
```
Setting this to `true` will disable Bleach filtering for all canned queries across all databases.
You can disable Bleach filtering just for a specific list of canned queries like so:
```json
{
""plugins"": {
""datasette-atom"": {
""allow_unsafe_html_in_canned_queries"": {
""museums"": [""latest"", ""moderation""]
}
}
}
}
```
This will disable Bleach just for the canned queries called `latest` and `moderation` in the `museums.db` database.
","
datasette-atom
Datasette plugin that adds support for generating Atom feeds with the results of a SQL query.
Installation
Install this plugin in the same environment as Datasette to enable the .atom output extension.
$ pip install datasette-atom
Usage
To create an Atom feed you need to define a custom SQL query that returns a required set of columns:
atom_id - a unique ID for each row. This article has suggestions about ways to create these IDs.
atom_title - a title for that row.
atom_updated - an RFC 3339 timestamp representing the last time the entry was modified in a significant way. This can usually be the time that the row was created.
The following columns are optional:
atom_content - content that should be shown in the feed. This will be treated as a regular string, so any embedded HTML tags will be escaped when they are displayed.
atom_content_html - content that should be shown in the feed. This will be treated as an HTML string, and will be sanitized using Bleach to ensure it does not have any malicious code in it before being returned as part of a <content type=""html""> Atom element. If both are provided, this will be used in place of atom_content.
atom_link - a URL that should be used as the link that the feed entry points to.
atom_author_name - the name of the author of the entry. If you provide this you can also provide atom_author_uri and atom_author_email with a URL and e-mail address for that author.
A query that returns these columns can then be returned as an Atom feed by adding the .atom extension.
Example
Here is an example SQL query which generates an Atom feed for new entries on www.niche-museums.com:
select
'tag:niche-museums.com,' || substr(created, 0, 11) || ':' || id as atom_id,
name as atom_title,
created as atom_updated,
'https://www.niche-museums.com/browse/museums/' || id as atom_link,
coalesce(
'<img src=""' || photo_url || '?w=800&h=400&fit=crop&auto=compress"">',
''
) || '<p>' || description || '</p>' as atom_content_html
from
museums
order by
created desc
limit
15
You can try this query by pasting it in here - then click the .atom link to see it as an Atom feed.
Using a canned query
Datasette's canned query mechanism is a useful way to configure feeds. If a canned query definition has a title that will be used as the title of the Atom feed.
Here's an example, defined using a metadata.yaml file:
databases:
browse:
queries:
feed:
title: Niche Museums
sql: |- select 'tag:niche-museums.com,' || substr(created, 0, 11) || ':' || id as atom_id, name as atom_title, created as atom_updated, 'https://www.niche-museums.com/browse/museums/' || id as atom_link, coalesce( '<img src=""' || photo_url || '?w=800&h=400&fit=crop&auto=compress"">', '' ) || '<p>' || description || '</p>' as atom_content_html from museums order by created desc limit 15
Disabling HTML filtering
The HTML allow-list used by Bleach for the atom_content_html column can be found in the clean(html) function at the bottom of datasette_atom/init.py.
You can disable Bleach entirely for Atom feeds generated using a canned query. You should only do this if you are certain that no user-provided HTML could be included in that value.
This will disable Bleach just for the canned queries called latest and moderation in the museums.db database.
",,,,,,
214299267,MDEwOlJlcG9zaXRvcnkyMTQyOTkyNjc=,datasette-render-timestamps,simonw/datasette-render-timestamps,0,9599,https://github.com/simonw/datasette-render-timestamps,Datasette plugin for rendering timestamps,0,2019-10-10T22:50:50Z,2020-10-17T11:09:42Z,2020-03-22T17:57:17Z,,17,4,4,Python,1,1,1,1,0,1,0,0,0,apache-2.0,"[""datasette"", ""datasette-plugin"", ""datasette-io""]",1,0,4,master,"{""admin"": false, ""push"": false, ""pull"": false}",,,1,2,"# datasette-render-timestamps
[](https://pypi.org/project/datasette-render-timestamps/)
[](https://circleci.com/gh/simonw/datasette-render-timestamps)
[](https://github.com/simonw/datasette-render-timestamps/blob/master/LICENSE)
Datasette plugin for rendering timestamps.
## Installation
Install this plugin in the same environment as Datasette to enable this new functionality:
pip install datasette-render-timestamps
The plugin will then look out for integer numbers that are likely to be timestamps - anything that would be a number of seconds from 5 years ago to 5 years in the future.
These will then be rendered in a more readable format.
## Configuration
You can disable automatic column detection in favour of explicitly listing the columns that you would like to render using [plugin configuration](https://datasette.readthedocs.io/en/stable/plugins.html#plugin-configuration) in a `metadata.json` file.
Add a `""datasette-render-timestamps""` configuration block and use a `""columns""` key to list the columns you would like to treat as timestamp values:
```json
{
""plugins"": {
""datasette-render-timestamps"": {
""columns"": [""created"", ""updated""]
}
}
}
```
This will cause any `created` or `updated` columns in any table to be treated as timestamps and rendered.
Save this to `metadata.json` and run datasette with the `--metadata` flag to load this configuration:
datasette serve mydata.db --metadata metadata.json
To disable automatic timestamp detection entirely, you can use `""columns"": []`.
This configuration block can be used at the top level, or it can be applied just to specific databases or tables. Here's how to apply it to just the `entries` table in the `news.db` database:
```json
{
""databases"": {
""news"": {
""tables"": {
""entries"": {
""plugins"": {
""datasette-render-timestamps"": {
""columns"": [""created"", ""updated""]
}
}
}
}
}
}
}
```
And here's how to apply it to every `created` column in every table in the `news.db` database:
```json
{
""databases"": {
""news"": {
""plugins"": {
""datasette-render-timestamps"": {
""columns"": [""created"", ""updated""]
}
}
}
}
}
```
### Customizing the date format
The default format is `%B %d, %Y - %H:%M:%S UTC` which renders for example: `October 10, 2019 - 07:18:29 UTC`. If you want another format, the date format can be customized using plugin configuration. Any format string supported by [strftime](http://strftime.org/) may be used. For example:
```json
{
""plugins"": {
""datasette-render-timestamps"": {
""format"": ""%Y-%m-%d-%H:%M:%S""
}
}
}
```
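Because the format string is standard Python `strftime`, you can preview how a given format will render before configuring it - a quick sketch, where the timestamp is just an example value:
```python
from datetime import datetime, timezone

ts = 1570691909  # an example Unix timestamp
dt = datetime.fromtimestamp(ts, tz=timezone.utc)
print(dt.strftime('%B %d, %Y - %H:%M:%S UTC'))  # October 10, 2019 - 07:18:29 UTC
print(dt.strftime('%Y-%m-%d-%H:%M:%S'))         # 2019-10-10-07:18:29
```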
","
datasette-render-timestamps
Datasette plugin for rendering timestamps.
Installation
Install this plugin in the same environment as Datasette to enable this new functionality:
pip install datasette-render-timestamps
The plugin will then look out for integer numbers that are likely to be timestamps - anything that would be a number of seconds from 5 years ago to 5 years in the future.
These will then be rendered in a more readable format.
Configuration
You can disable automatic column detection in favour of explicitly listing the columns that you would like to render using plugin configuration in a metadata.json file.
Add a ""datasette-render-timestamps"" configuration block and use a ""columns"" key to list the columns you would like to treat as timestamp values:
To disable automatic timestamp detection entirely, you can use ""columns"": [].
This configuration block can be used at the top level, or it can be applied just to specific databases or tables. Here's how to apply it to just the entries table in the news.db database:
The default format is %B %d, %Y - %H:%M:%S UTC which renders for example: October 10, 2019 - 07:18:29 UTC. If you want another format, the date format can be customized using plugin configuration. Any format string supported by strftime may be used. For example:
",,,,,,
217216787,MDEwOlJlcG9zaXRvcnkyMTcyMTY3ODc=,datasette-haversine,simonw/datasette-haversine,0,9599,https://github.com/simonw/datasette-haversine,Datasette plugin that adds a custom SQL function for haversine distances,0,2019-10-24T05:16:14Z,2021-07-28T20:13:38Z,2021-07-28T20:14:24Z,,8,1,1,Python,1,1,1,1,0,0,0,0,0,apache-2.0,"[""datasette"", ""datasette-plugin"", ""datasette-io""]",0,0,1,main,"{""admin"": false, ""push"": false, ""pull"": false}",,,0,1,"# datasette-haversine
[](https://pypi.org/project/datasette-haversine/)
[](https://github.com/simonw/datasette-haversine/releases)
[](https://github.com/simonw/datasette-haversine/actions?query=workflow%3ATest)
[](https://github.com/simonw/datasette-haversine/blob/main/LICENSE)
Datasette plugin that adds a custom SQL function for haversine distances
Install this plugin in the same environment as Datasette to enable the `haversine()` SQL function.
$ pip install datasette-haversine
The plugin is built on top of the [haversine](https://github.com/mapado/haversine) library.
## haversine() to calculate distances
```sql
select haversine(lat1, lon1, lat2, lon2);
```
This will return the distance in kilometers between the point defined by `(lat1, lon1)` and the point defined by `(lat2, lon2)`.
## Custom units
By default `haversine()` returns results in km. You can pass an optional third argument to get results in a different unit:
- `ft` for feet
- `m` for meters
- `in` for inches
- `mi` for miles
- `nmi` for nautical miles
- `km` for kilometers (the default)
```sql
select haversine(lat1, lon1, lat2, lon2, 'mi');
```
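The same calculation is available directly from the underlying `haversine` Python package if you want to check a result outside of SQL - a short sketch, with example coordinates:
```python
from haversine import haversine, Unit

paris = (48.8567, 2.3508)    # (latitude, longitude)
london = (51.5074, -0.1278)

print(haversine(paris, london))                   # distance in km (the default)
print(haversine(paris, london, unit=Unit.MILES))  # distance in miles
```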
","
datasette-haversine
Datasette plugin that adds a custom SQL function for haversine distances
Install this plugin in the same environment as Datasette to enable the haversine() SQL function.
$ pip install datasette-haversine
The plugin is built on top of the haversine library.
haversine() to calculate distances
select haversine(lat1, lon1, lat2, lon2);
This will return the distance in kilometers between the point defined by (lat1, lon1) and the point defined by (lat2, lon2).
Custom units
By default haversine() returns results in km. You can pass an optional third argument to get results in a different unit:
ft for feet
m for meters
in for inches
mi for miles
nmi for nautical miles
km for kilometers (the default)
select haversine(lat1, lon1, lat2, lon2, 'mi');
",,,,,,
219372133,MDEwOlJlcG9zaXRvcnkyMTkzNzIxMzM=,sqlite-transform,simonw/sqlite-transform,0,9599,https://github.com/simonw/sqlite-transform,Tool for running transformations on columns in a SQLite database,0,2019-11-03T22:07:53Z,2021-08-02T22:06:23Z,2021-08-02T22:07:57Z,,64,29,29,Python,1,1,1,1,0,1,0,0,0,apache-2.0,"[""sqlite"", ""datasette-io"", ""datasette-tool""]",1,0,29,main,"{""admin"": false, ""push"": false, ""pull"": false}",,,1,1,"# sqlite-transform

[](https://pypi.org/project/sqlite-transform/)
[](https://github.com/simonw/sqlite-transform/releases)
[](https://github.com/simonw/sqlite-transform/actions?query=workflow%3ATest)
[](https://github.com/dogsheep/sqlite-transform/blob/main/LICENSE)
Tool for running transformations on columns in a SQLite database.
> **:warning: This tool is no longer maintained**
>
> I added a new tool to [sqlite-utils](https://sqlite-utils.datasette.io/) called [sqlite-utils convert](https://sqlite-utils.datasette.io/en/stable/cli.html#converting-data-in-columns) which provides a super-set of the functionality originally provided here. `sqlite-transform` is no longer maintained, and I recommend switching to using `sqlite-utils convert` instead.
## How to install
pip install sqlite-transform
## parsedate and parsedatetime
These subcommands will run all values in the specified column through `dateutil.parser.parse()` and replace them with the result, formatted as an ISO timestamp or ISO date.
For example, if a row in the database has an `opened` column which contains `10/10/2019 08:10:00 PM`, running the following command:
sqlite-transform parsedatetime my.db mytable opened
Will result in that value being replaced by `2019-10-10T20:10:00`.
Using the `parsedate` subcommand here would result in `2019-10-10` instead.
In the case of ambiguous dates such as `03/04/05` these commands both default to assuming American-style `mm/dd/yy` format. You can pass `--dayfirst` to specify that the day should be assumed to be first, or `--yearfirst` for the year.
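For example, to parse ambiguous dates as day-first:

    sqlite-transform parsedatetime my.db mytable opened --dayfirst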
## jsonsplit
The `jsonsplit` subcommand takes columns that contain a comma-separated list, for example a `tags` column containing records like `""trees,park,dogs""` and converts it into a JSON array `[""trees"", ""park"", ""dogs""]`.
This is useful for taking advantage of Datasette's [Facet by JSON array](https://docs.datasette.io/en/stable/facets.html#facet-by-json-array) feature.
sqlite-transform jsonsplit my.db mytable tags
It defaults to splitting on commas, but you can specify a different delimiter character using the `--delimiter` option, for example:
sqlite-transform jsonsplit \
my.db mytable tags --delimiter ';'
Values within the array will be treated as strings, so a column containing `123,552,775` will be converted into the JSON array `[""123"", ""552"", ""775""]`.
You can specify a different type for these values using `--type int` or `--type float`, for example:
sqlite-transform jsonsplit \
my.db mytable tags --type int
This will result in that column being converted into `[123, 552, 775]`.
## lambda for executing your own code
The `lambda` subcommand lets you specify Python code which will be executed against the column.
Here's how to convert a column to uppercase:
sqlite-transform lambda my.db mytable mycolumn --code='str(value).upper()'
The code you provide will be compiled into a function that takes `value` as a single argument. You can break your function body into multiple lines, provided the last line is a `return` statement:
sqlite-transform lambda my.db mytable mycolumn --code='value = str(value)
return value.upper()'
You can also specify Python modules that should be imported and made available to your code using one or more `--import` options:
sqlite-transform lambda my.db mytable mycolumn \
--code='""\n"".join(textwrap.wrap(value, 10))' \
--import=textwrap
The `--dry-run` option will output a preview of the transformation against the first ten rows, without modifying the database.
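For example, to preview the uppercase transformation from above without writing any changes:

    sqlite-transform lambda my.db mytable mycolumn \
        --code='str(value).upper()' --dry-run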
## Saving the result to a separate column
Each of these commands accepts optional `--output` and `--output-type` options. These can be used to save the result of the transformation to a separate column, which will be created if the column does not already exist.
To save the result of `jsonsplit` to a new column called `json_tags`, use the following:
sqlite-transform jsonsplit my.db mytable tags \
--output json_tags
The type of the created column defaults to `text`, but a different column type can be specified using `--output-type`. This example will create a new floating point column called `float_id` with a copy of each item's ID increased by 0.5:
sqlite-transform lambda my.db mytable id \
--code 'float(value) + 0.5' \
--output float_id \
--output-type float
You can drop the original column at the end of the operation by adding `--drop`.
## Splitting a column into multiple columns
Sometimes you may wish to convert a single column into multiple derived columns. For example, you may have a `location` column containing `latitude,longitude` values which you wish to split out into separate `latitude` and `longitude` columns.
You can achieve this using the `--multi` option to `sqlite-transform lambda`. This option expects your `--code` function to return a Python dictionary: new columns will be created and populated for each of the keys in that dictionary.
For the `latitude,longitude` example you would use the following:
sqlite-transform lambda demo.db places location \
--code 'return {
""latitude"": float(value.split("","")[0]),
""longitude"": float(value.split("","")[1]),
}' --multi
The type of the returned values will be taken into account when creating the new columns. In this example, the resulting database schema will look like this:
```sql
CREATE TABLE [places] (
[location] TEXT,
[latitude] FLOAT,
[longitude] FLOAT
);
```
The code function can also return `None`, in which case its output will be ignored.
You can drop the original column at the end of the operation by adding `--drop`.
## Disabling the progress bar
By default each command will show a progress bar. Pass `-s` or `--silent` to hide that progress bar.
","
sqlite-transform
Tool for running transformations on columns in a SQLite database.
⚠️ This tool is no longer maintained
I added a new tool to sqlite-utils called sqlite-utils convert which provides a super-set of the functionality originally provided here. sqlite-transform is no longer maintained, and I recommend switching to using sqlite-utils convert instead.
How to install
pip install sqlite-transform
parsedate and parsedatetime
These subcommands will run all values in the specified column through dateutil.parser.parse() and replace them with the result, formatted as an ISO timestamp or ISO date.
For example, if a row in the database has an opened column which contains 10/10/2019 08:10:00 PM, running the following command:
sqlite-transform parsedatetime my.db mytable opened
Will result in that value being replaced by 2019-10-10T20:10:00.
Using the parsedate subcommand here would result in 2019-10-10 instead.
In the case of ambiguous dates such as 03/04/05 these commands both default to assuming American-style mm/dd/yy format. You can pass --dayfirst to specify that the day should be assumed to be first, or --yearfirst for the year.
jsonsplit
The jsonsplit subcommand takes columns that contain a comma-separated list, for example a tags column containing records like ""trees,park,dogs"" and converts it into a JSON array [""trees"", ""park"", ""dogs""].
This is useful for taking advantage of Datasette's Facet by JSON array feature.
sqlite-transform jsonsplit my.db mytable tags
It defaults to splitting on commas, but you can specify a different delimiter character using the --delimiter option, for example:
Values within the array will be treated as strings, so a column containing 123,552,775 will be converted into the JSON array [""123"", ""552"", ""775""].
You can specify a different type for these values using --type int or --type float, for example:
sqlite-transform jsonsplit \
my.db mytable tags --type int
This will result in that column being converted into [123, 552, 775].
lambda for executing your own code
The lambda subcommand lets you specify Python code which will be executed against the column.
The code you provide will be compiled into a function that takes value as a single argument. You can break your function body into multiple lines, provided the last line is a return statement:
The --dry-run option will output a preview of the transformation against the first ten rows, without modifying the database.
Saving the result to a separate column
Each of these commands accepts optional --output and --output-type options. These can be used to save the result of the transformation to a separate column, which will be created if the column does not already exist.
To save the result of jsonsplit to a new column called json_tags, use the following:
The type of the created column defaults to text, but a different column type can be specified using --output-type. This example will create a new floating point column called float_id with a copy of each item's ID increased by 0.5:
You can drop the original column at the end of the operation by adding --drop.
Splitting a column into multiple columns
Sometimes you may wish to convert a single column into multiple derived columns. For example, you may have a location column containing latitude,longitude values which you wish to split out into separate latitude and longitude columns.
You can achieve this using the --multi option to sqlite-transform lambda. This option expects your --code function to return a Python dictionary: new columns will be created and populated for each of the keys in that dictionary.
For the latitude,longitude example you would use the following:
The type of the returned values will be taken into account when creating the new columns. In this example, the resulting database schema will look like this:
The code function can also return None, in which case its output will be ignored.
You can drop the original column at the end of the operation by adding --drop.
Disabling the progress bar
By default each command will show a progress bar. Pass -s or --silent to hide that progress bar.
",,,,,,
220716822,MDEwOlJlcG9zaXRvcnkyMjA3MTY4MjI=,datasette-render-markdown,simonw/datasette-render-markdown,0,9599,https://github.com/simonw/datasette-render-markdown,Datasette plugin for rendering Markdown,0,2019-11-09T23:28:31Z,2022-05-26T04:58:56Z,2022-07-18T19:35:10Z,,57,11,11,Python,1,1,1,1,0,0,0,0,1,apache-2.0,"[""datasette"", ""datasette-io"", ""datasette-plugin"", ""markdown""]",0,1,11,main,"{""admin"": false, ""maintain"": false, ""push"": false, ""triage"": false, ""pull"": false}",,,0,2,"# datasette-render-markdown
[](https://pypi.org/project/datasette-render-markdown/)
[](https://github.com/simonw/datasette-render-markdown/releases)
[](https://github.com/simonw/datasette-render-markdown/actions?query=workflow%3ATest)
[](https://github.com/simonw/datasette-render-markdown/blob/main/LICENSE)
Datasette plugin for rendering Markdown.
## Installation
Install this plugin in the same environment as Datasette to enable this new functionality:
$ pip install datasette-render-markdown
## Usage
You can explicitly list the columns you would like to treat as Markdown using [plugin configuration](https://datasette.readthedocs.io/en/stable/plugins.html#plugin-configuration) in a `metadata.json` file.
Add a `""datasette-render-markdown""` configuration block and use a `""columns""` key to list the columns you would like to treat as Markdown values:
```json
{
""plugins"": {
""datasette-render-markdown"": {
""columns"": [""body""]
}
}
}
```
This will cause any `body` column in any table to be treated as markdown and safely rendered using [Python-Markdown](https://python-markdown.github.io/). The resulting HTML is then run through [Bleach](https://bleach.readthedocs.io/) to avoid the risk of XSS security problems.
Save this to `metadata.json` and run Datasette with the `--metadata` flag to load this configuration:
$ datasette serve mydata.db --metadata metadata.json
The configuration block can be used at the top level, or it can be applied just to specific databases or tables. Here's how to apply it to just the `entries` table in the `news.db` database:
```json
{
""databases"": {
""news"": {
""tables"": {
""entries"": {
""plugins"": {
""datasette-render-markdown"": {
""columns"": [""body""]
}
}
}
}
}
}
}
```
And here's how to apply it to every `body` column in every table in the `news.db` database:
```json
{
""databases"": {
""news"": {
""plugins"": {
""datasette-render-markdown"": {
""columns"": [""body""]
}
}
}
}
}
```
## Columns that match a naming convention
This plugin can also render markdown in any columns that match a specific naming convention.
By default, columns that have a name ending in `_markdown` will be rendered.
You can try this out using the following query:
```sql
select '# Hello there
* This is a list
* of items
[And a link](https://github.com/simonw/datasette-render-markdown).'
as demo_markdown
```
You can configure a different list of wildcard patterns using the `""patterns""` configuration key. Here's how to render columns that end in either `_markdown` or `_md`:
```json
{
""plugins"": {
""datasette-render-markdown"": {
""patterns"": [""*_markdown"", ""*_md""]
}
}
}
```
To disable wildcard column matching entirely, set `""patterns"": []` in your plugin metadata configuration.
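For example:
```json
{
    ""plugins"": {
        ""datasette-render-markdown"": {
            ""patterns"": []
        }
    }
}
```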
## Markdown extensions
The [Python-Markdown library](https://python-markdown.github.io/) that powers this plugin supports extensions, both [bundled](https://python-markdown.github.io/extensions/) and [third-party](https://github.com/Python-Markdown/markdown/wiki/Third-Party-Extensions). These can be used to enable additional Markdown features such as [table support](https://python-markdown.github.io/extensions/tables/).
You can configure support for extensions using the `""extensions""` key in your plugin metadata configuration.
Since extensions may introduce new HTML tags, you will also need to add those tags to the list of tags that are allowed by the [Bleach](https://bleach.readthedocs.io/) sanitizer. You can do that using the `""extra_tags""` key, and you can whitelist additional HTML attributes using `""extra_attrs""`. See [the Bleach documentation](https://bleach.readthedocs.io/en/latest/clean.html#allowed-tags-tags) for more information on this.
Here's how to enable support for [Markdown tables](https://python-markdown.github.io/extensions/tables/):
```json
{
""plugins"": {
""datasette-render-markdown"": {
""extensions"": [""tables""],
""extra_tags"": [""table"", ""thead"", ""tr"", ""th"", ""td"", ""tbody""]
}
}
}
```
### GitHub-Flavored Markdown
Enabling [GitHub-Flavored Markdown](https://help.github.com/en/github/writing-on-github) (useful if you are working with data imported from GitHub using [github-to-sqlite](https://github.com/dogsheep/github-to-sqlite)) is a little more complicated.
First, you will need to install the [py-gfm](https://py-gfm.readthedocs.io) package:
$ pip install py-gfm
Note that `py-gfm` has [a bug](https://github.com/Zopieux/py-gfm/issues/13) that causes it to pin to `Markdown<3.0` - so if you are using it you should install it _before_ installing `datasette-render-markdown` to ensure you get a compatible version of that dependency.
Now you can configure it like this. Note that the extension name is `mdx_gfm:GithubFlavoredMarkdownExtension` and you need to whitelist several extra HTML tags and attributes:
```json
{
""plugins"": {
""datasette-render-markdown"": {
""extra_tags"": [
""hr"",
""br"",
""details"",
""summary"",
""input""
],
""extra_attrs"": {
""input"": [
""type"",
""disabled"",
""checked""
]
},
""extensions"": [
""mdx_gfm:GithubFlavoredMarkdownExtension""
]
}
}
}
```
The `input` tag and its attributes are needed to support rendering checkboxes in issue descriptions.
## Markdown in templates
The plugin also adds a new template function: `render_markdown(value)`. You can use this in your templates like so:
```html+jinja
{{ render_markdown(""""""
# This is markdown
* One
* Two
* Three
"""""") }}
```
You can load additional extensions and whitelist tags by passing extra arguments to the function like this:
```html+jinja
{{ render_markdown(""""""
## Markdown table
First Header | Second Header
------------- | -------------
Content Cell | Content Cell
Content Cell | Content Cell
"""""", extensions=[""tables""],
extra_tags=[""table"", ""thead"", ""tr"", ""th"", ""td"", ""tbody""])) }}
```
","
datasette-render-markdown
Datasette plugin for rendering Markdown.
Installation
Install this plugin in the same environment as Datasette to enable this new functionality:
$ pip install datasette-render-markdown
Usage
You can explicitly list the columns you would like to treat as Markdown using plugin configuration in a metadata.json file.
Add a ""datasette-render-markdown"" configuration block and use a ""columns"" key to list the columns you would like to treat as Markdown values:
This will cause any body column in any table to be treated as markdown and safely rendered using Python-Markdown. The resulting HTML is then run through Bleach to avoid the risk of XSS security problems.
Save this to metadata.json and run Datasette with the --metadata flag to load this configuration:
The configuration block can be used at the top level, or it can be applied just to specific databases or tables. Here's how to apply it to just the entries table in the news.db database:
This plugin can also render markdown in any columns that match a specific naming convention.
By default, columns that have a name ending in _markdown will be rendered.
You can try this out using the following query:
select '# Hello there
* This is a list
* of items
[And a link](https://github.com/simonw/datasette-render-markdown).'
as demo_markdown
You can configure a different list of wildcard patterns using the ""patterns"" configuration key. Here's how to render columns that end in either _markdown or _md:
You can configure support for extensions using the ""extensions"" key in your plugin metadata configuration.
Since extensions may introduce new HTML tags, you will also need to add those tags to the list of tags that are allowed by the Bleach sanitizer. You can do that using the ""extra_tags"" key, and you can whitelist additional HTML attributes using ""extra_attrs"". See the Bleach documentation for more information on this.
First, you will need to install the py-gfm package:
$ pip install py-gfm
Note that py-gfm has a bug that causes it to pin to Markdown<3.0 - so if you are using it you should install it before installing datasette-render-markdown to ensure you get a compatible version of that dependency.
Now you can configure it like this. Note that the extension name is mdx_gfm:GithubFlavoredMarkdownExtension and you need to whitelist several extra HTML tags and attributes:
",1,public,0,,0,
221802296,MDEwOlJlcG9zaXRvcnkyMjE4MDIyOTY=,datasette-template-sql,simonw/datasette-template-sql,0,9599,https://github.com/simonw/datasette-template-sql,Datasette plugin for executing SQL queries from templates,0,2019-11-14T23:05:34Z,2021-05-18T17:58:47Z,2021-05-18T17:58:44Z,https://datasette.io/plugins/datasette-template-sql,23,6,6,Python,1,1,1,1,0,0,0,0,1,apache-2.0,"[""datasette"", ""datasette-plugin"", ""datasette-io""]",0,1,6,main,"{""admin"": false, ""push"": false, ""pull"": false}",,,0,1,"# datasette-template-sql
[](https://pypi.org/project/datasette-template-sql/)
[](https://github.com/simonw/datasette-template-sql/releases)
[](https://github.com/simonw/datasette-template-sql/actions?query=workflow%3ATest)
[](https://github.com/simonw/datasette-template-sql/blob/main/LICENSE)
Datasette plugin for executing SQL queries from templates.
## Examples
[datasette.io](https://datasette.io/) uses this plugin extensively with [custom page templates](https://docs.datasette.io/en/stable/custom_templates.html#custom-pages), check out [simonw/datasette.io](https://github.com/simonw/datasette.io) to see how it works.
[www.niche-museums.com](https://www.niche-museums.com/) uses this plugin to run a custom themed website on top of Datasette. The full source code for the site [is here](https://github.com/simonw/museums) - see also [niche-museums.com, powered by Datasette](https://simonwillison.net/2019/Nov/25/niche-museums/).
[simonw/til](https://github.com/simonw/til) is another simple example, described in [Using a self-rewriting README powered by GitHub Actions to track TILs](https://simonwillison.net/2020/Apr/20/self-rewriting-readme/).
## Installation
Run this command to install the plugin in the same environment as Datasette:
$ pip install datasette-template-sql
## Usage
This plugin makes a new function, `sql(sql_query)`, available to your Datasette templates.
You can use it like this:
```html+jinja
{% for row in sql(""select 1 + 1 as two, 2 * 4 as eight"") %}
{% for key in row.keys() %}
{{ key }}: {{ row[key] }}
{% endfor %}
{% endfor %}
```
The plugin will execute SQL against the current database for the page in `database.html`, `table.html` and `row.html` templates. If a template does not have a current database (`index.html` for example) the query will execute against the first attached database.
### Queries with arguments
You can construct a SQL query using `?` or `:name` parameter syntax by passing a list or dictionary as a second argument:
```html+jinja
{% for row in sql(""select distinct topic from til order by topic"") %}
    <h2>{{ row.topic }}</h2>
    <ul>
    {% for til in sql(""select * from til where topic = ?"", [row.topic]) %}
        <li><a href=""{{ til.url }}"">{{ til.title }}</a> - {{ til.created[:10] }}</li>
    {% endfor %}
    </ul>
{% endfor %}
```
Here's the same example using the `:topic` style of parameters:
```html+jinja
{% for row in sql(""select distinct topic from til order by topic"") %}
    <h2>{{ row.topic }}</h2>
    <ul>
    {% for til in sql(""select * from til where topic = :topic"", {""topic"": row.topic}) %}
        <li><a href=""{{ til.url }}"">{{ til.title }}</a> - {{ til.created[:10] }}</li>
    {% endfor %}
    </ul>
{% endfor %}
```
### Querying a different database
You can pass an optional `database=` argument to specify a named database to use for the query. For example, if you have attached a `news.db` database you could use this:
```html+jinja
{% for article in sql(
""select headline, date, summary from articles order by date desc limit 5"",
database=""news""
) %}
    <h3>{{ article.headline }}</h3>
    <p class=""date"">{{ article.date }}</p>
    <p>{{ article.summary }}</p>
{% endfor %}
```
","
datasette-template-sql
Datasette plugin for executing SQL queries from templates.
Run this command to install the plugin in the same environment as Datasette:
$ pip install datasette-template-sql
Usage
This plugin makes a new function, sql(sql_query), available to your Datasette templates.
You can use it like this:
{% for row in sql(""select 1 + 1 as two, 2 * 4 as eight"") %}
    {% for key in row.keys() %}
        {{ key }}: {{ row[key] }}<br>
    {% endfor %}
{% endfor %}
The plugin will execute SQL against the current database for the page in database.html, table.html and row.html templates. If a template does not have a current database (index.html for example) the query will execute against the first attached database.
Queries with arguments
You can construct a SQL query using ? or :name parameter syntax by passing a list or dictionary as a second argument:
{% for row in sql(""select distinct topic from til order by topic"") %}
    <h2>{{ row.topic }}</h2>
    <ul>
    {% for til in sql(""select * from til where topic = ?"", [row.topic]) %}
        <li><a href=""{{ til.url }}"">{{ til.title }}</a> - {{ til.created[:10] }}</li>
    {% endfor %}
    </ul>
{% endfor %}
Here's the same example using the :topic style of parameters:
{% for row in sql(""select distinct topic from til order by topic"") %}
    <h2>{{ row.topic }}</h2>
    <ul>
    {% for til in sql(""select * from til where topic = :topic"", {""topic"": row.topic}) %}
        <li><a href=""{{ til.url }}"">{{ til.title }}</a> - {{ til.created[:10] }}</li>
    {% endfor %}
    </ul>
{% endfor %}
Querying a different database
You can pass an optional database= argument to specify a named database to use for the query. For example, if you have attached a news.db database you could use this:
{% for article in sql(
    ""select headline, date, summary from articles order by date desc limit 5"",
    database=""news""
) %}
    <h3>{{ article.headline }}</h3>
    <p class=""date"">{{ article.date }}</p>
    <p>{{ article.summary }}</p>
{% endfor %}
",,,,,,
228485806,MDEwOlJlcG9zaXRvcnkyMjg0ODU4MDY=,datasette-configure-asgi,simonw/datasette-configure-asgi,0,9599,https://github.com/simonw/datasette-configure-asgi,Datasette plugin for configuring arbitrary ASGI middleware,0,2019-12-16T22:17:10Z,2020-08-25T15:54:32Z,2019-12-16T22:19:49Z,,6,1,1,Python,1,1,1,1,0,0,0,0,0,apache-2.0,"[""asgi"", ""datasette"", ""datasette-plugin"", ""datasette-io""]",0,0,1,master,"{""admin"": false, ""push"": false, ""pull"": false}",,,0,1,"# datasette-configure-asgi
[](https://pypi.org/project/datasette-configure-asgi/)
[](https://circleci.com/gh/simonw/datasette-configure-asgi)
[](https://github.com/simonw/datasette-configure-asgi/blob/master/LICENSE)
Datasette plugin for configuring arbitrary ASGI middleware
## Installation
pip install datasette-configure-asgi
## Usage
This plugin only takes effect if your `metadata.json` file contains relevant top-level plugin configuration in a `""datasette-configure-asgi""` configuration key.
For example, to wrap your Datasette instance in the `asgi-log-to-sqlite` middleware configured to write logs to `/tmp/log.db` you would use the following:
```json
{
""plugins"": {
""datasette-configure-asgi"": [
{
""class"": ""asgi_log_to_sqlite.AsgiLogToSqlite"",
""args"": {
""file"": ""/tmp/log.db""
}
}
]
}
}
```
The `""datasette-configure-asgi""` key should be a list of JSON objects. Each object should have a `""class""` key indicating the class to be used, and an optional `""args""` key providing any necessary arguments to be passed to that class constructor.
## Plugin structure
This plugin can be used to wrap your Datasette instance in any ASGI middleware that conforms to the following structure:
```python
import time

# Example middleware that times each request and prints the duration
class SomeAsgiMiddleware:
def __init__(self, app, arg1, arg2):
self.app = app
self.arg1 = arg1
self.arg2 = arg2
async def __call__(self, scope, receive, send):
start = time.time()
await self.app(scope, receive, send)
end = time.time()
print(""Time taken: {}"".format(end - start))
```
So the middleware is a class with a constructor which takes the wrapped application as a first argument, `app`, followed by further named arguments to configure the middleware. It provides an `async def __call__(self, scope, receive, send)` method to implement the middleware's behavior.
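Assuming that example middleware lived in an importable module called `my_middleware` (a hypothetical name used here for illustration), the corresponding `metadata.json` configuration might look like this:
```json
{
    ""plugins"": {
        ""datasette-configure-asgi"": [
            {
                ""class"": ""my_middleware.SomeAsgiMiddleware"",
                ""args"": {
                    ""arg1"": ""first value"",
                    ""arg2"": ""second value""
                }
            }
        ]
    }
}
```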
","
datasette-configure-asgi
Datasette plugin for configuring arbitrary ASGI middleware
Installation
pip install datasette-configure-asgi
Usage
This plugin only takes effect if your metadata.json file contains relevant top-level plugin configuration in a ""datasette-configure-asgi"" configuration key.
For example, to wrap your Datasette instance in the asgi-log-to-sqlite middleware configured to write logs to /tmp/log.db you would use the following:
The ""datasette-configure-asgi"" key should be a list of JSON objects. Each object should have a ""class"" key indicating the class to be used, and an optional ""args"" key providing any necessary arguments to be passed to that class constructor.
Plugin structure
This plugin can be used to wrap your Datasette instance in any ASGI middleware that conforms to the following structure:
So the middleware is a class with a constructor which takes the wrapped application as a first argument, app, followed by further named arguments to configure the middleware. It provides an async def __call__(self, scope, receive, send) method to implement the middleware's behavior.
",,,,,,
234825790,MDEwOlJlcG9zaXRvcnkyMzQ4MjU3OTA=,datasette-upload-csvs,simonw/datasette-upload-csvs,0,9599,https://github.com/simonw/datasette-upload-csvs,Datasette plugin for uploading CSV files and converting them to database tables,0,2020-01-19T02:07:05Z,2022-07-03T20:58:20Z,2022-09-09T16:23:59Z,https://datasette.io/plugins/datasette-upload-csvs,58,9,9,Python,1,1,1,1,0,1,0,0,4,apache-2.0,"[""csvs"", ""datasette"", ""datasette-io"", ""datasette-plugin""]",1,4,9,main,"{""admin"": false, ""maintain"": false, ""push"": false, ""triage"": false, ""pull"": false}",,,1,2,"# datasette-upload-csvs
[](https://pypi.org/project/datasette-upload-csvs/)
[](https://github.com/simonw/datasette-upload-csvs/releases)
[](https://github.com/simonw/datasette-upload-csvs/actions?query=workflow%3ATest)
[](https://github.com/simonw/datasette-upload-csvs/blob/main/LICENSE)
Datasette plugin for uploading CSV files and converting them to database tables
## Installation
datasette install datasette-upload-csvs
## Usage
The plugin adds an interface at `/-/upload-csvs` for uploading a CSV file and using it to create a new database table.
By default only [the root actor](https://datasette.readthedocs.io/en/stable/authentication.html#using-the-root-actor) can access the page - so you'll need to run Datasette with the `--root` option and click on the link shown in the terminal to sign in and access the page.
The `upload-csvs` permission governs access. You can use permission plugins such as [datasette-permissions-sql](https://github.com/simonw/datasette-permissions-sql) to grant additional access to the write interface.
","
datasette-upload-csvs
Datasette plugin for uploading CSV files and converting them to database tables
Installation
datasette install datasette-upload-csvs
Usage
The plugin adds an interface at /-/upload-csvs for uploading a CSV file and using it to create a new database table.
By default only the root actor can access the page - so you'll need to run Datasette with the --root option and click on the link shown in the terminal to sign in and access the page.
The upload-csvs permission governs access. You can use permission plugins such as datasette-permissions-sql to grant additional access to the write interface.
",1,public,0,,0,
236110759,MDEwOlJlcG9zaXRvcnkyMzYxMTA3NTk=,datasette-auth-existing-cookies,simonw/datasette-auth-existing-cookies,0,9599,https://github.com/simonw/datasette-auth-existing-cookies,Datasette plugin that authenticates users based on existing domain cookies,0,2020-01-25T01:20:31Z,2022-05-28T01:50:15Z,2022-05-30T17:10:11Z,,54,3,3,Python,1,1,1,1,0,1,0,0,0,apache-2.0,"[""datasette"", ""datasette-io"", ""datasette-plugin""]",1,0,3,main,"{""admin"": false, ""maintain"": false, ""push"": false, ""triage"": false, ""pull"": false}",,,1,3,"# datasette-auth-existing-cookies
[](https://pypi.org/project/datasette-auth-existing-cookies/)
[](https://github.com/simonw/datasette-auth-existing-cookies/releases)
[](https://github.com/simonw/datasette-auth-existing-cookies/actions?query=workflow%3ATest)
[](https://github.com/simonw/datasette-auth-existing-cookies/blob/master/LICENSE)
Datasette plugin that authenticates users based on existing domain cookies.
## When to use this
This plugin allows you to build custom authentication for Datasette when you are hosting a Datasette instance on the same domain as another, authenticated website.
Consider a website on `www.example.com` which supports user authentication.
You could run Datasette on `data.example.com` in a way that lets it see cookies that were set for the `.example.com` domain.
Using this plugin, you could build an API endpoint at `www.example.com/user-for-cookies` which returns a JSON object representing the currently signed-in user, based on their cookies.
The plugin running on `data.example.com` will then make the `actor` available to the rest of Datasette based on the response from that API.
Read about [Datasette's authentication and permissions system](https://docs.datasette.io/en/stable/authentication.html) for more on how actors and permissions work.
## Configuration
This plugin requires some configuration in the Datasette [metadata.json file](https://datasette.readthedocs.io/en/stable/plugins.html#plugin-configuration).
The following configuration options are supported:
- `api_url`: this is the API endpoint that Datasette should call with the user's cookies in order to identify the logged in user.
- `cookies`: optional. A list of cookie names that should be passed through to the API endpoint - if left blank, the default is to send all cookies.
- `ttl`: optional. By default Datasette will make a request to the API endpoint for every HTTP request received by Datasette itself. A `ttl` value of 5 will cause Datasette to cache the actor associated with the user's cookies for 5 seconds, reducing that API traffic.
- `headers`: an optional list of other headers to forward to the API endpoint as query string parameters.
Here is an example that uses all four of these settings:
```json
{
""plugins"": {
""datasette-auth-existing-cookies"": {
""api_url"": ""http://www.example.com/user-from-cookies"",
""cookies"": [""sessionid""],
""headers"": [""host""],
""ttl"": 10
}
}
}
```
With this configuration any hit to a Datasette hosted at `data.example.com` will result in the following request being made to the `http://www.example.com/user-from-cookies` API endpoint:
```
GET http://www.example.com/user-from-cookies?host=data.example.com
Cookie: sessionid=abc123
```
That API is expected to return a JSON object representing the current user:
```json
{
""id"": 1,
""name"": ""Barry""
}
```
Since `ttl` is set to 10 that actor will be cached for ten seconds against that exact combination of cookies and headers. When that cache expires another hit will be made to the API.
When deciding on a TTL value, take into account that users who lose access to the core site - maybe because their session expires, or their account is disabled - will still be able to access the Datasette instance until that cache expires.
","
datasette-auth-existing-cookies
Datasette plugin that authenticates users based on existing domain cookies.
When to use this
This plugin allows you to build custom authentication for Datasette when you are hosting a Datasette instance on the same domain as another, authenticated website.
Consider a website on www.example.com which supports user authentication.
You could run Datasette on data.example.com in a way that lets it see cookies that were set for the .example.com domain.
Using this plugin, you could build an API endpoint at www.example.com/user-for-cookies which returns a JSON object representing the currently signed-in user, based on their cookies.
The plugin running on data.example.com will then make the actor available to the rest of Datasette based on the response from that API.
This plugin requires some configuration in the Datasette metadata.json file.
The following configuration options are supported:
api_url: this is the API endpoint that Datasette should call with the user's cookies in order to identify the logged in user.
cookies: optional. A list of cookie names that should be passed through to the API endpoint - if left blank, the default is to send all cookies.
ttl: optional. By default Datasette will make a request to the API endpoint for every HTTP request received by Datasette itself. A ttl value of 5 will cause Datasette to cache the actor associated with the user's cookies for 5 seconds, reducing that API traffic.
headers: an optional list of other headers to forward to the API endpoint as query string parameters.
Here is an example that uses all four of these settings:
With this configuration any hit to a Datasette hosted at data.example.com will result in the following request being made to the http://www.example.com/user-from-cookies API endpoint:
GET http://www.example.com/user-from-cookies?host=data.example.com
Cookie: sessionid=abc123
That API is expected to return a JSON object representing the current user:
{
""id"": 1,
""name"": ""Barry""
}
Since ttl is set to 10 that actor will be cached for ten seconds against that exact combination of cookies and headers. When that cache expires another hit will be made to the API.
When deciding on a TTL value, take into account that users who lose access to the core site - maybe because their session expires, or their account is disabled - will still be able to access the Datasette instance until that cache expires.
",1,public,0,,,
236867027,MDEwOlJlcG9zaXRvcnkyMzY4NjcwMjc=,datasette-sentry,simonw/datasette-sentry,0,9599,https://github.com/simonw/datasette-sentry,Datasette plugin for configuring Sentry,0,2020-01-28T23:41:27Z,2022-07-18T20:28:25Z,2022-10-06T22:31:29Z,,26,6,6,Python,1,1,1,1,0,0,0,0,0,apache-2.0,"[""datasette"", ""datasette-io"", ""datasette-plugin"", ""sentry""]",0,0,6,main,"{""admin"": false, ""maintain"": false, ""push"": false, ""triage"": false, ""pull"": false}",,,0,2,"# datasette-sentry
[](https://pypi.org/project/datasette-sentry/)
[](https://github.com/simonw/datasette-sentry/releases)
[](https://github.com/simonw/datasette-sentry/actions?query=workflow%3ATest)
[](https://github.com/simonw/datasette-sentry/blob/main/LICENSE)
Datasette plugin for configuring Sentry for error reporting
## Installation
pip install datasette-sentry
## Usage
This plugin only takes effect if your `metadata.json` file contains relevant top-level plugin configuration in a `""datasette-sentry""` configuration key.
You will need a Sentry DSN - see their [Getting Started instructions](https://docs.sentry.io/error-reporting/quickstart/?platform=python).
Add it to `metadata.json` like this:
```json
{
""plugins"": {
""datasette-sentry"": {
""dsn"": ""https://KEY@sentry.io/PROJECTID""
}
}
}
```
Settings in `metadata.json` are visible to anyone who visits the `/-/metadata` URL so this is a good place to take advantage of Datasette's [secret configuration values](https://datasette.readthedocs.io/en/stable/plugins.html#secret-configuration-values), in which case your configuration will look more like this:
```json
{
""plugins"": {
""datasette-sentry"": {
""dsn"": {
""$env"": ""SENTRY_DSN""
}
}
}
}
```
Then make a `SENTRY_DSN` environment variable available to Datasette.
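For example, one way to do that when starting Datasette from the shell (the DSN value here is a placeholder):

    SENTRY_DSN=https://KEY@sentry.io/PROJECTID datasette mydata.db --metadata metadata.json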
## Configuration
In addition to the `dsn` setting, you can also configure the Sentry [sample rate](https://docs.sentry.io/platforms/python/configuration/sampling/) by setting `sample_rate` to a floating point number between 0 and 1.
For example, to capture 25% of errors you would do this:
```json
{
""plugins"": {
""datasette-sentry"": {
""dsn"": {
""$env"": ""SENTRY_DSN""
},
""sample_rate"": 0.25
}
}
}
```
","
datasette-sentry
Datasette plugin for configuring Sentry for error reporting
Installation
pip install datasette-sentry
Usage
This plugin only takes effect if your metadata.json file contains relevant top-level plugin configuration in a ""datasette-sentry"" configuration key.
Settings in metadata.json are visible to anyone who visits the /-/metadata URL so this is a good place to take advantage of Datasette's secret configuration values, in which case your configuration will look more like this:
",1,public,0,,0,
237321267,MDEwOlJlcG9zaXRvcnkyMzczMjEyNjc=,geojson-to-sqlite,simonw/geojson-to-sqlite,0,9599,https://github.com/simonw/geojson-to-sqlite,CLI tool for converting GeoJSON files to SQLite (with SpatiaLite),0,2020-01-30T22:51:05Z,2022-03-05T00:40:56Z,2022-04-13T23:39:25Z,,117,34,34,Python,1,1,1,1,0,3,0,0,4,apache-2.0,"[""datasette-io"", ""datasette-tool"", ""geojson"", ""gis"", ""sqlite""]",3,4,34,main,"{""admin"": false, ""maintain"": false, ""push"": false, ""triage"": false, ""pull"": false}",,,3,3,"# geojson-to-sqlite
[](https://pypi.org/project/geojson-to-sqlite/)
[](https://github.com/simonw/geojson-to-sqlite/releases)
[](https://github.com/simonw/geojson-to-sqlite/actions?query=workflow%3ATest)
[](https://github.com/simonw/geojson-to-sqlite/blob/main/LICENSE)
CLI tool for converting GeoJSON to SQLite (optionally with SpatiaLite)
[RFC 7946: The GeoJSON Format](https://tools.ietf.org/html/rfc7946)
## How to install
$ pip install geojson-to-sqlite
## How to use
You can run this tool against a GeoJSON file like so:
$ geojson-to-sqlite my.db features features.geojson
This will load all of the features from the `features.geojson` file into a table called `features`.
Each row will have a `geometry` column containing the feature geometry, and columns for each of the keys found in any `properties` attached to those features. (To bundle all properties into a single JSON object, use the `--properties` flag.)
The table will be created the first time you run the command.
On subsequent runs you can use the `--alter` option to add any new columns that are missing from the table.
You can pass more than one GeoJSON file, in which case the contents of all of the files will be inserted into the same table.
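For example, loading a second file (a hypothetical `more-features.geojson`) into the existing table while adding any new columns it introduces:

    $ geojson-to-sqlite my.db features more-features.geojson --alter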
If your features have an `""id""` property it will be used as the primary key for the table. You can also use `--pk=PROPERTY` with the name of a different property to use that as the primary key instead. If you don't want to use the `""id""` as the primary key (maybe it contains duplicate values) you can use `--pk ''` to specify no primary key.
Specifying a primary key will also allow you to upsert data, updating existing rows instead of always inserting new ones.
If no primary key is specified, a SQLite `rowid` column will be used.
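For example, to use a hypothetical `slug` property as the primary key:

    $ geojson-to-sqlite my.db features features.geojson --pk=slug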
You can use `-` as the filename to import from standard input. For example:
$ curl https://eric.clst.org/assets/wiki/uploads/Stuff/gz_2010_us_040_00_20m.json \
| geojson-to-sqlite my.db states - --pk GEO_ID
## Using with SpatiaLite
By default, the `geometry` column will contain JSON.
If you have installed the [SpatiaLite](https://www.gaia-gis.it/fossil/libspatialite/index) module for SQLite you can instead import the geometry into a geospatially indexed column.
You can do this using the `--spatialite` option, like so:
$ geojson-to-sqlite my.db features features.geojson --spatialite
The tool will search for the SpatiaLite module in the following locations:
- `/usr/lib/x86_64-linux-gnu/mod_spatialite.so`
- `/usr/local/lib/mod_spatialite.dylib`
If you have installed the module in another location, you can use the `--spatialite_mod=xxx` option to specify where:
$ geojson-to-sqlite my.db features features.geojson \
--spatialite_mod=/usr/lib/mod_spatialite.dylib
You can create a SpatiaLite spatial index on the `geometry` column using the `--spatial-index` option:
$ geojson-to-sqlite my.db features features.geojson --spatial-index
Using this option implies `--spatialite` so you do not need to add that.
## Streaming large datasets
For large datasets, consider using newline-delimited JSON to stream features into the database without loading the entire feature collection into memory.
For example, to load a day of earthquake reports from USGS:
$ geojson-to-sqlite quakes.db quakes tests/quakes.ndjson \
--nl --pk=id --spatialite
When using newline-delimited JSON, tables will also be created from the first feature, instead of guessing types based on the first 100 features.
If you want to use a larger subset of your data to guess column types (for example, if some fields are inconsistent) you can use [fiona](https://fiona.readthedocs.io/en/latest/cli.html) to collect features into a single collection.
$ head tests/quakes.ndjson | fio collect | \
geojson-to-sqlite quakes.db quakes - --spatialite
This will take the first 10 lines from `tests/quakes.ndjson`, pass them to `fio collect`, which turns them into a single feature collection, and pass that, in turn, to `geojson-to-sqlite`.
## Using this with Datasette
Databases created using this tool can be explored and published using [Datasette](https://datasette.readthedocs.io/).
The Datasette documentation includes a section on [how to use it to browse SpatiaLite databases](https://datasette.readthedocs.io/en/stable/spatialite.html).
The [datasette-leaflet-geojson](https://datasette.io/plugins/datasette-leaflet-geojson) plugin can be used to visualize columns containing GeoJSON geometries on a [Leaflet](https://leafletjs.com/) map.
If you are using SpatiaLite you will need to output the geometry as GeoJSON in order for that plugin to work. You can do that using the SpatiaLite `AsGeoJSON()` function - something like this:
```sql
select rowid, AsGeoJSON(geometry) from mytable limit 10
```
The [datasette-geojson-map](https://datasette.io/plugins/datasette-geojson-map) is an alternative plugin which will automatically render SpatiaLite geometries as a Leaflet map on the corresponding table page, without needing you to call `AsGeoJSON(geometry)`.
","
geojson-to-sqlite
CLI tool for converting GeoJSON to SQLite (optionally with SpatiaLite)
You can run this tool against a GeoJSON file like so:
$ geojson-to-sqlite my.db features features.geojson
This will load all of the features from the features.geojson file into a table called features.
Each row will have a geometry column containing the feature geometry, and columns for each of the keys found in any properties attached to those features. (To bundle all properties into a single JSON object, use the --properties flag.)
The table will be created the first time you run the command.
On subsequent runs you can use the --alter option to add any new columns that are missing from the table.
You can pass more than one GeoJSON file, in which case the contents of all of the files will be inserted into the same table.
If your features have an ""id"" property it will be used as the primary key for the table. You can also use --pk=PROPERTY with the name of a different property to use that as the primary key instead. If you don't want to use the ""id"" as the primary key (maybe it contains duplicate values) you can use --pk '' to specify no primary key.
Specifying a primary key will also allow you to upsert data, updating existing rows instead of always inserting new ones.
If no primary key is specified, a SQLite rowid column will be used.
You can use - as the filename to import from standard input. For example:
By default, the geometry column will contain JSON.
If you have installed the SpatiaLite module for SQLite you can instead import the geometry into a geospatially indexed column.
You can do this using the --spatialite option, like so:
$ geojson-to-sqlite my.db features features.geojson --spatialite
The tool will search for the SpatiaLite module in the following locations:
/usr/lib/x86_64-linux-gnu/mod_spatialite.so
/usr/local/lib/mod_spatialite.dylib
If you have installed the module in another location, you can use the --spatialite_mod=xxx option to specify where:
$ geojson-to-sqlite my.db features features.geojson \
--spatialite_mod=/usr/lib/mod_spatialite.dylib
You can create a SpatiaLite spatial index on the geometry column using the --spatial-index option:
$ geojson-to-sqlite my.db features features.geojson --spatial-index
Using this option implies --spatialite so you do not need to add that.
Streaming large datasets
For large datasets, consider using newline-delimited JSON to stream features into the database without loading the entire feature collection into memory.
For example, to load a day of earthquake reports from USGS:
When using newline-delimited JSON, tables will also be created from the first feature, instead of guessing types based on the first 100 features.
If you want to use a larger subset of your data to guess column types (for example, if some fields are inconsistent) you can use fiona to collect features into a single collection.
This will take the first 10 lines from tests/quakes.ndjson, pass them to fio collect, which turns them into a single feature collection, and pass that, in turn, to geojson-to-sqlite.
Using this with Datasette
Databases created using this tool can be explored and published using Datasette.
If you are using SpatiaLite you will need to output the geometry as GeoJSON in order for that plugin to work. You can do that using the SpatiaLite AsGeoJSON() function - something like this:
select rowid, AsGeoJSON(geometry) from mytable limit 10
The datasette-geojson-map is an alternative plugin which will automatically render SpatiaLite geometries as a Leaflet map on the corresponding table page, without needing you to call AsGeoJSON(geometry).
",1,public,0,,,
238339412,MDEwOlJlcG9zaXRvcnkyMzgzMzk0MTI=,datasette-debug-asgi,simonw/datasette-debug-asgi,0,9599,https://github.com/simonw/datasette-debug-asgi,Datasette plugin for dumping out the ASGI scope,0,2020-02-05T00:57:09Z,2021-08-17T23:40:02Z,2021-08-17T23:41:03Z,https://datasette.io/plugins/datasette-debug-asgi,16,1,1,Python,1,1,1,1,0,0,0,0,0,apache-2.0,"[""asgi"", ""datasette-io"", ""datasette-plugin""]",0,0,1,main,"{""admin"": false, ""maintain"": false, ""push"": false, ""triage"": false, ""pull"": false}",,,0,1,"# datasette-debug-asgi
[](https://pypi.org/project/datasette-debug-asgi/)
[](https://github.com/simonw/datasette-debug-asgi/releases)
[](https://github.com/simonw/datasette-debug-asgi/actions?query=workflow%3ATest)
[](https://github.com/simonw/datasette-debug-asgi/blob/main/LICENSE)
Datasette plugin for dumping out the ASGI scope.
Adds a new URL at `/-/asgi-scope` which shows the current ASGI scope. Demo here: https://datasette.io/-/asgi-scope
## Installation
pip install datasette-debug-asgi
## Usage
Visit `/-/asgi-scope` to see debug output showing the ASGI scope.
You can add query string parameters such as `/-/asgi-scope?q=hello`.
You can also add extra path components such as `/-/asgi-scope/more/path/here`.
","
Visit /-/asgi-scope to see debug output showing the ASGI scope.
You can add query string parameters such as /-/asgi-scope?q=hello.
You can also add extra path components such as /-/asgi-scope/more/path/here.
",,,,,,
240815938,MDEwOlJlcG9zaXRvcnkyNDA4MTU5Mzg=,shapefile-to-sqlite,simonw/shapefile-to-sqlite,0,9599,https://github.com/simonw/shapefile-to-sqlite,Load shapefiles into a SQLite (optionally SpatiaLite) database,0,2020-02-16T01:55:29Z,2021-03-26T08:39:43Z,2020-08-23T06:00:41Z,,54,15,15,Python,1,1,1,1,0,0,0,0,3,apache-2.0,"[""sqlite"", ""gis"", ""spatialite"", ""shapefiles"", ""datasette"", ""datasette-io"", ""datasette-tool""]",0,3,15,main,"{""admin"": false, ""push"": false, ""pull"": false}",,,0,1,"# shapefile-to-sqlite
[](https://pypi.org/project/shapefile-to-sqlite/)
[](https://circleci.com/gh/simonw/shapefile-to-sqlite)
[](https://github.com/simonw/shapefile-to-sqlite/blob/main/LICENSE)
Load shapefiles into a SQLite (optionally SpatiaLite) database.
Project background: [Things I learned about shapefiles building shapefile-to-sqlite](https://simonwillison.net/2020/Feb/19/shapefile-to-sqlite/)
## How to install
$ pip install shapefile-to-sqlite
## How to use
You can run this tool against a shapefile like so:
$ shapefile-to-sqlite my.db features.shp
This will load the geometries as GeoJSON in a text column.
## Using with SpatiaLite
If you have [SpatiaLite](https://www.gaia-gis.it/fossil/libspatialite/index) available you can load them as SpatiaLite geometries like this:
$ shapefile-to-sqlite my.db features.shp --spatialite
The data will be loaded into a table called `features` - based on the name of the shapefile. You can specify an alternative table name using `--table`:
$ shapefile-to-sqlite my.db features.shp --table=places --spatialite
The tool will search for the SpatiaLite module in the following locations:
- `/usr/lib/x86_64-linux-gnu/mod_spatialite.so`
- `/usr/local/lib/mod_spatialite.dylib`
If you have installed the module in another location, you can use the `--spatialite_mod=xxx` option to specify where:
$ shapefile-to-sqlite my.db features.shp \
--spatialite_mod=/usr/lib/mod_spatialite.dylib
You can use the `--spatial-index` option to create a spatial index on the `geometry` column:
$ shapefile-to-sqlite my.db features.shp --spatial-index
You can omit `--spatialite` if you use either `--spatialite-mod` or `--spatial-index`.
## Projections
By default, this tool will attempt to convert geometries in the shapefile to the WGS 84 projection, for best conformance with the [GeoJSON specification](https://tools.ietf.org/html/rfc7946).
If you want it to leave the data in whatever projection was used by the shapefile, use the `--crs=keep` option.
You can convert the data to another output projection by passing it to the `--crs` option. For example, to convert to [EPSG:2227](https://epsg.io/2227) (California zone 3) use `--crs=epsg:2227`.
The full list of formats accepted by the `--crs` option is [documented here](https://pyproj4.github.io/pyproj/stable/api/crs.html#pyproj.crs.CRS.__init__).
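For example, to load a shapefile while converting it to EPSG:2227:

    $ shapefile-to-sqlite my.db features.shp --crs=epsg:2227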
## Extracting columns
If your data contains columns with a small number of heavily duplicated values - the names of specific agencies responsible for parcels of land for example - you can extract those columns into separate lookup tables referenced by foreign keys using the `-c` option:
$ shapefile-to-sqlite my.db features.shp -c agency
This will create an `agency` table with `id` and `name` columns, and will create the `agency` column in your main table as an integer foreign key reference to that table.
The `-c` option can be used multiple times.
[CPAD_2020a_Units](https://calands.datasettes.com/calands/CPAD_2020a_Units) is an example of a table created using the `-c` option.
","
shapefile-to-sqlite
Load shapefiles into a SQLite (optionally SpatiaLite) database.
You can omit --spatialite if you use either --spatialite-mod or --spatial-index.
Projections
By default, this tool will attempt to convert geometries in the shapefile to the WGS 84 projection, for best conformance with the GeoJSON specification.
If you want it to leave the data in whatever projection was used by the shapefile, use the --crs=keep option.
You can convert the data to another output projection by passing it to the --crs option. For example, to convert to EPSG:2227 (California zone 3) use --crs=epsg:2227.
The full list of formats accepted by the --crs option is documented here.
Extracting columns
If your data contains columns with a small number of heavily duplicated values - the names of specific agencies responsible for parcels of land for example - you can extract those columns into separate lookup tables referenced by foreign keys using the -c option:
This will create an agency table with id and name columns, and will create the agency column in your main table as an integer foreign key reference to that table.
The -c option can be used multiple times.
CPAD_2020a_Units is an example of a table created using the -c option.
",,,,,,
242260583,MDEwOlJlcG9zaXRvcnkyNDIyNjA1ODM=,datasette-mask-columns,simonw/datasette-mask-columns,0,9599,https://github.com/simonw/datasette-mask-columns,Datasette plugin that masks specified database columns,0,2020-02-22T01:29:16Z,2021-06-10T19:50:37Z,2021-06-10T19:51:02Z,https://datasette.io/plugins/datasette-mask-columns,15,2,2,Python,1,1,1,1,0,0,0,0,0,apache-2.0,"[""datasette"", ""datasette-plugin"", ""datasette-io""]",0,0,2,main,"{""admin"": false, ""push"": false, ""pull"": false}",,,0,1,"# datasette-mask-columns
[](https://pypi.org/project/datasette-mask-columns/)
[](https://github.com/simonw/datasette-mask-columns/releases)
[](https://github.com/simonw/datasette-mask-columns/actions?query=workflow%3ATest)
[](https://github.com/simonw/datasette-mask-columns/blob/main/LICENSE)
Datasette plugin that masks specified database columns
## Installation
pip install datasette-mask-columns
This depends on plugin hook changes in a not-yet released branch of Datasette. See [issue #678](https://github.com/simonw/datasette/issues/678) for details.
## Usage
In your `metadata.json` file add a section like this describing the database and table in which you wish to mask columns:
```json
{
""databases"": {
""my-database"": {
""plugins"": {
""datasette-mask-columns"": {
""users"": [""password""]
}
}
}
}
}
```
All SQL queries against the `users` table in `my-database.db` will now return `null` for the `password` column, no matter what value that column actually holds.
The table page for `users` will display the text `REDACTED` in the masked column. This visual hint will only be available on the table page; it will not display this text for arbitrary queries against the table.
","
datasette-mask-columns
Datasette plugin that masks specified database columns
Installation
pip install datasette-mask-columns
This depends on plugin hook changes in a not-yet released branch of Datasette. See issue #678 for details.
Usage
In your metadata.json file add a section like this describing the database and table in which you wish to mask columns:
All SQL queries against the users table in my-database.db will now return null for the password column, no matter what value that column actually holds.
The table page for users will display the text REDACTED in the masked column. This visual hint will only be available on the table page; it will not display this text for arbitrary queries against the table.
",,,,,,
243710733,MDEwOlJlcG9zaXRvcnkyNDM3MTA3MzM=,datasette-ics,simonw/datasette-ics,0,9599,https://github.com/simonw/datasette-ics,Datasette plugin for outputting iCalendar files,0,2020-02-28T08:11:01Z,2022-07-07T14:11:49Z,2022-07-12T02:08:10Z,https://datasette.io/plugins/datasette-ics,34,13,13,Python,1,1,1,1,0,0,0,0,0,apache-2.0,"[""datasette"", ""datasette-io"", ""datasette-plugin"", ""icalendar"", ""ics""]",0,0,13,main,"{""admin"": false, ""maintain"": false, ""push"": false, ""triage"": false, ""pull"": false}",,,0,2,"# datasette-ics
[](https://pypi.org/project/datasette-ics/)
[](https://github.com/simonw/datasette-ics/releases)
[](https://github.com/simonw/datasette-ics/actions?query=workflow%3ATest)
[](https://github.com/simonw/datasette-ics/blob/main/LICENSE)
Datasette plugin that adds support for generating [iCalendar .ics files](https://tools.ietf.org/html/rfc5545) with the results of a SQL query.
## Installation
Install this plugin in the same environment as Datasette to enable the `.ics` output extension.
$ pip install datasette-ics
## Usage
To create an iCalendar file you need to define a custom SQL query that returns a required set of columns:
* `event_name` - the short name for the event
* `event_dtstart` - when the event starts
The following columns are optional:
* `event_dtend` - when the event ends
* `event_duration` - the duration of the event (use instead of `dtend`)
* `event_description` - a longer description of the event
* `event_uid` - a globally unique identifier for this event
* `event_tzid` - the timezone for the event, e.g. `America/Chicago`
A query that returns these columns can then be returned as an ics feed by adding the `.ics` extension.
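For example, a minimal query against a hypothetical `events` table (table and column names here are illustrative) could look like this:
```sql
select
  title as event_name,
  start as event_dtstart,
  'America/Los_Angeles' as event_tzid
from
  events
order by
  start
```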
## Demo
[This SQL query](https://www.rockybeaches.com/data?sql=with+inner+as+(%0D%0A++select%0D%0A++++datetime%2C%0D%0A++++substr(datetime%2C+0%2C+11)+as+date%2C%0D%0A++++mllw_feet%2C%0D%0A++++lag(mllw_feet)+over+win+as+previous_mllw_feet%2C%0D%0A++++lead(mllw_feet)+over+win+as+next_mllw_feet%0D%0A++from%0D%0A++++tide_predictions%0D%0A++where%0D%0A++++station_id+%3D+%3Astation_id%0D%0A++++and+datetime+%3E%3D+date()%0D%0A++++window+win+as+(%0D%0A++++++order+by%0D%0A++++++++datetime%0D%0A++++)%0D%0A++order+by%0D%0A++++datetime%0D%0A)%2C%0D%0Alowest_tide_per_day+as+(%0D%0A++select%0D%0A++++date%2C%0D%0A++++datetime%2C%0D%0A++++mllw_feet%0D%0A++from%0D%0A++++inner%0D%0A++where%0D%0A++++mllw_feet+%3C%3D+previous_mllw_feet%0D%0A++++and+mllw_feet+%3C%3D+next_mllw_feet%0D%0A)%0D%0Aselect%0D%0A++min(datetime)+as+event_dtstart%2C%0D%0A++%27Low+tide%3A+%27+||+mllw_feet+||+%27+feet%27+as+event_name%2C%0D%0A++%27America%2FLos_Angeles%27+as+event_tzid%0D%0Afrom%0D%0A++lowest_tide_per_day%0D%0Agroup+by%0D%0A++date%0D%0Aorder+by%0D%0A++date&station_id=9414131) calculates the lowest tide per day at Pillar Point in Half Moon Bay, California.
Since the query returns `event_name`, `event_dtstart` and `event_tzid` columns it produces [this ICS feed](https://www.rockybeaches.com/data.ics?sql=with+inner+as+(%0D%0A++select%0D%0A++++datetime%2C%0D%0A++++substr(datetime%2C+0%2C+11)+as+date%2C%0D%0A++++mllw_feet%2C%0D%0A++++lag(mllw_feet)+over+win+as+previous_mllw_feet%2C%0D%0A++++lead(mllw_feet)+over+win+as+next_mllw_feet%0D%0A++from%0D%0A++++tide_predictions%0D%0A++where%0D%0A++++station_id+%3D+%3Astation_id%0D%0A++++and+datetime+%3E%3D+date()%0D%0A++++window+win+as+(%0D%0A++++++order+by%0D%0A++++++++datetime%0D%0A++++)%0D%0A++order+by%0D%0A++++datetime%0D%0A)%2C%0D%0Alowest_tide_per_day+as+(%0D%0A++select%0D%0A++++date%2C%0D%0A++++datetime%2C%0D%0A++++mllw_feet%0D%0A++from%0D%0A++++inner%0D%0A++where%0D%0A++++mllw_feet+%3C%3D+previous_mllw_feet%0D%0A++++and+mllw_feet+%3C%3D+next_mllw_feet%0D%0A)%0D%0Aselect%0D%0A++min(datetime)+as+event_dtstart%2C%0D%0A++%27Low+tide%3A+%27+||+mllw_feet+||+%27+feet%27+as+event_name%2C%0D%0A++%27America%2FLos_Angeles%27+as+event_tzid%0D%0Afrom%0D%0A++lowest_tide_per_day%0D%0Agroup+by%0D%0A++date%0D%0Aorder+by%0D%0A++date&station_id=9414131). If you subscribe to that in a calendar application such as Apple Calendar you get something that looks like this:

## Using a canned query
Datasette's [canned query mechanism](https://datasette.readthedocs.io/en/stable/sql_queries.html#canned-queries) can be used to configure calendars. If a canned query definition has a `title`, that will be used as the title of the calendar.
Here's an example, defined using a `metadata.yaml` file:
```yaml
databases:
mydatabase:
queries:
calendar:
title: My Calendar
sql: |-
select
title as event_name,
start as event_dtstart,
description as event_description
from
events
order by
start
limit
100
```
This will result in a calendar feed at `http://localhost:8001/mydatabase/calendar.ics`
","
datasette-ics
Datasette plugin that adds support for generating iCalendar .ics files with the results of a SQL query.
Installation
Install this plugin in the same environment as Datasette to enable the .ics output extension.
$ pip install datasette-ics
Usage
To create an iCalendar file you need to define a custom SQL query that returns a required set of columns:
event_name - the short name for the event
event_dtstart - when the event starts
The following columns are optional:
event_dtend - when the event ends
event_duration - the duration of the event (use instead of dtend)
event_description - a longer description of the event
event_uid - a globally unique identifier for this event
event_tzid - the timezone for the event, e.g. America/Chicago
A query that returns these columns can then be returned as an ics feed by adding the .ics extension.
Demo
This SQL query calculates the lowest tide per day at Pillar Point in Half Moon Bay, California.
Since the query returns event_name, event_dtstart and event_tzid columns it produces this ICS feed. If you subscribe to that in a calendar application such as Apple Calendar you get something that looks like this:
Using a canned query
Datasette's canned query mechanism can be used to configure calendars. If a canned query definition has a title that will be used as the title of the calendar.
Here's an example, defined using a metadata.yaml file:
databases:
mydatabase:
queries:
calendar:
title: My Calendar
sql: |-
select title as event_name, start as event_dtstart, description as event_description from events order by start limit 100
This will result in a calendar feed at http://localhost:8001/mydatabase/calendar.ics
",1,public,0,,0,
243887036,MDEwOlJlcG9zaXRvcnkyNDM4ODcwMzY=,datasette-configure-fts,simonw/datasette-configure-fts,0,9599,https://github.com/simonw/datasette-configure-fts,Datasette plugin for enabling full-text search against selected table columns,0,2020-02-29T01:50:57Z,2020-11-01T02:59:12Z,2020-11-01T02:59:10Z,,42,2,2,Python,1,1,1,1,0,0,0,0,2,apache-2.0,"[""datasette"", ""datasette-plugin"", ""datasette-io""]",0,2,2,main,"{""admin"": false, ""push"": false, ""pull"": false}",,,0,1,"# datasette-configure-fts
[](https://pypi.org/project/datasette-configure-fts/)
[](https://github.com/simonw/datasette-configure-fts/releases)
[](https://github.com/simonw/datasette-configure-fts/actions?query=workflow%3ATest)
[](https://github.com/simonw/datasette-configure-fts/blob/main/LICENSE)
Datasette plugin for enabling full-text search against selected table columns
## Installation
Install this plugin in the same environment as Datasette.
$ datasette install datasette-configure-fts
## Usage
Having installed the plugin, visit `/-/configure-fts` on your Datasette instance to configure FTS for tables on attached writable databases.
Any time you have permission to configure FTS for a table a menu item will appear in the table actions menu on the table page.
By default only [the root actor](https://datasette.readthedocs.io/en/stable/authentication.html#using-the-root-actor) can access the page - so you'll need to run Datasette with the `--root` option and click on the link shown in the terminal to sign in and access the page.
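For example, assuming a database file called `mydatabase.db`, that might look like:

    datasette mydatabase.db --root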
The `configure-fts` permission governs access. You can use permission plugins such as [datasette-permissions-sql](https://github.com/simonw/datasette-permissions-sql) to grant additional access to the write interface.
","
datasette-configure-fts
Datasette plugin for enabling full-text search against selected table columns
Installation
Install this plugin in the same environment as Datasette.
$ datasette install datasette-configure-fts
Usage
Having installed the plugin, visit /-/configure-fts on your Datasette instance to configure FTS for tables on attached writable databases.
Any time you have permission to configure FTS for a table a menu item will appear in the table actions menu on the table page.
By default only the root actor can access the page - so you'll need to run Datasette with the --root option and click on the link shown in the terminal to sign in and access the page.
The configure-fts permission governs access. You can use permission plugins such as datasette-permissions-sql to grant additional access to the write interface.
",,,,,,
245670670,MDEwOlJlcG9zaXRvcnkyNDU2NzA2NzA=,fec-to-sqlite,simonw/fec-to-sqlite,0,9599,https://github.com/simonw/fec-to-sqlite,Save FEC campaign finance data to a SQLite database,0,2020-03-07T16:52:49Z,2020-12-19T05:09:05Z,2020-03-07T18:21:48Z,,16,8,8,Python,1,1,1,1,0,0,0,0,1,apache-2.0,"[""sqlite"", ""fec"", ""datasette"", ""datasette-io"", ""datasette-tool""]",0,1,8,master,"{""admin"": false, ""push"": false, ""pull"": false}",,,0,2,"# fec-to-sqlite
[](https://pypi.org/project/fec-to-sqlite/)
[](https://circleci.com/gh/simonw/fec-to-sqlite)
[](https://github.com/simonw/fec-to-sqlite/blob/master/LICENSE)
Create a SQLite database using FEC campaign contributions data.
This tool builds on [fecfile](https://github.com/esonderegger/) by Evan Sonderegger.
## How to install
$ pip install fec-to-sqlite
## Usage
$ fec-to-sqlite filings filings.db 1146148
This fetches the filing with ID `1146148` and stores it in tables in a SQLite database called `filings.db`. It will create any tables it needs.
You can pass more than one filing ID, separated by spaces.
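For example, to fetch two filings in one run (the second ID here is purely illustrative):

    $ fec-to-sqlite filings filings.db 1146148 1146149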
","
fec-to-sqlite
Create a SQLite database using FEC campaign contributions data.
This fetches the filing with ID 1146148 and stores it in tables in a SQLite database called filings.db. It will create any tables it needs.
You can pass more than one filing ID, separated by spaces.
",,,,,,
245856731,MDEwOlJlcG9zaXRvcnkyNDU4NTY3MzE=,datasette-search-all,simonw/datasette-search-all,0,9599,https://github.com/simonw/datasette-search-all,Datasette plugin for searching all searchable tables at once,0,2020-03-08T17:21:54Z,2021-12-19T04:06:49Z,2022-10-05T01:53:33Z,,186,6,6,Python,1,1,1,1,0,2,0,0,0,apache-2.0,"[""datasette"", ""datasette-io"", ""datasette-plugin"", ""search""]",2,0,6,main,"{""admin"": false, ""maintain"": false, ""push"": false, ""triage"": false, ""pull"": false}",,,2,2,"# datasette-search-all
[](https://pypi.org/project/datasette-search-all/)
[](https://github.com/simonw/datasette-search-all/releases)
[](https://github.com/simonw/datasette-search-all/actions?query=workflow%3ATest)
[](https://github.com/simonw/datasette-search-all/blob/main/LICENSE)
Datasette plugin for searching all searchable tables at once.
## Installation
Install the plugin in the same Python environment as Datasette:
pip install datasette-search-all
## Background
See [datasette-search-all: a new plugin for searching multiple Datasette tables at once](https://simonwillison.net/2020/Mar/9/datasette-search-all/) for background on this project. You can try the plugin out at https://fara.datasettes.com/
## Usage
This plugin only works if at least one of the tables connected to your Datasette instance has been configured for SQLite's full-text search.
The [Datasette search documentation](https://docs.datasette.io/en/stable/full_text_search.html) includes details on how to enable full-text search for a table.
You can also use the following tools:
* [sqlite-utils](https://sqlite-utils.datasette.io/en/stable/cli.html#configuring-full-text-search) includes a command-line tool for enabling full-text search (see the example below this list).
* [datasette-enable-fts](https://github.com/simonw/datasette-enable-fts) is a Datasette plugin that adds a web interface for enabling search for specific columns.
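For example, full-text search could be enabled with sqlite-utils like this (the database, table and column names are illustrative):

    sqlite-utils enable-fts mydata.db documents title body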
If the plugin detects at least one searchable table it will add a search form to the homepage.
You can also navigate to `/-/search` on your Datasette instance to use the search interface directly.
## Screenshot

","
datasette-search-all
Datasette plugin for searching all searchable tables at once.
Installation
Install the plugin in the same Python environment as Datasette:
sqlite-utils includes a command-line tool for enabling full-text search.
datasette-enable-fts is a Datasette plugin that adds a web interface for enabling search for specific columns.
If the plugin detects at least one searchable table it will add a search form to the homepage.
You can also navigate to /-/search on your Datasette instance to use the search interface directly.
Screenshot
",1,public,0,,0,
246108561,MDEwOlJlcG9zaXRvcnkyNDYxMDg1NjE=,datasette-column-inspect,simonw/datasette-column-inspect,0,9599,https://github.com/simonw/datasette-column-inspect,Experimental plugin that adds a column inspector,0,2020-03-09T18:11:00Z,2020-12-09T21:46:10Z,2020-12-09T21:47:38Z,,15,1,1,HTML,1,1,1,1,0,0,0,0,3,apache-2.0,"[""datasette"", ""datasette-plugin"", ""datasette-io""]",0,3,1,main,"{""admin"": false, ""push"": false, ""pull"": false}",,,0,1,"# datasette-column-inspect
[](https://pypi.org/project/datasette-column-inspect/)
[](https://github.com/simonw/datasette-column-inspect/releases)
[](https://github.com/simonw/datasette-column-inspect/actions?query=workflow%3ATest)
[](https://github.com/simonw/datasette-column-inspect/blob/main/LICENSE)
Highly experimental Datasette plugin for inspecting columns.
## Installation
Install this plugin in the same environment as Datasette.
$ pip install datasette-column-inspect
## Usage
This plugin adds an icon to each column on the table page which opens an inspection side panel.
","
datasette-column-inspect
Highly experimental Datasette plugin for inspecting columns.
Installation
Install this plugin in the same environment as Datasette.
$ pip install datasette-column-inspect
Usage
This plugin adds an icon to each column on the table page which opens an inspection side panel.
",,,,,,
247527438,MDEwOlJlcG9zaXRvcnkyNDc1Mjc0Mzg=,datasette-edit-schema,simonw/datasette-edit-schema,0,9599,https://github.com/simonw/datasette-edit-schema,Datasette plugin for modifying table schemas,0,2020-03-15T18:34:06Z,2022-07-01T22:20:25Z,2022-08-22T22:45:58Z,,133,6,6,JavaScript,1,1,1,1,0,0,0,0,10,apache-2.0,"[""datasette"", ""datasette-io"", ""datasette-plugin""]",0,10,6,main,"{""admin"": false, ""maintain"": false, ""push"": false, ""triage"": false, ""pull"": false}",,,0,1,"# datasette-edit-schema
[](https://pypi.org/project/datasette-edit-schema/)
[](https://github.com/simonw/datasette-edit-schema/releases)
[](https://github.com/simonw/datasette-edit-schema/actions?query=workflow%3ATest)
[](https://github.com/simonw/datasette-edit-schema/blob/master/LICENSE)
Datasette plugin for modifying table schemas
## Features
* Add new columns to a table
* Rename columns in a table
* Modify the type of columns in a table
* Re-order the columns in a table
* Rename a table
* Delete a table
## Installation
Install this plugin in the same environment as Datasette.
$ pip install datasette-edit-schema
## Usage
Navigate to `/-/edit-schema/dbname/tablename` on your Datasette instance to edit a specific table.
Use `/-/edit-schema/dbname` to create a new table in a specific database.
By default only [the root actor](https://datasette.readthedocs.io/en/stable/authentication.html#using-the-root-actor) can access the page - so you'll need to run Datasette with the `--root` option and click on the link shown in the terminal to sign in and access the page.
## Permissions
The `edit-schema` permission governs access. You can use permission plugins such as [datasette-permissions-sql](https://github.com/simonw/datasette-permissions-sql) to grant additional access to the write interface.
These permission checks will call the `permission_allowed()` plugin hook with three arguments:
- `action` will be the string `""edit-schema""`
- `actor` will be the currently authenticated actor - usually a dictionary
- `resource` will be the string name of the database
## Screenshot

## Development
To set up this plugin locally, first checkout the code. Then create a new virtual environment:
cd datasette-edit-schema
python3 -mvenv venv
source venv/bin/activate
Or if you are using `pipenv`:
pipenv shell
Now install the dependencies and test dependencies:
pip install -e '.[test]'
To run the tests:
pytest
","
datasette-edit-schema
Datasette plugin for modifying table schemas
Features
Add new columns to a table
Rename columns in a table
Modify the type of columns in a table
Re-order the columns in a table
Rename a table
Delete a table
Installation
Install this plugin in the same environment as Datasette.
$ pip install datasette-edit-schema
Usage
Navigate to /-/edit-schema/dbname/tablename on your Datasette instance to edit a specific table.
Use /-/edit-schema/dbname to create a new table in a specific database.
By default only the root actor can access the page - so you'll need to run Datasette with the --root option and click on the link shown in the terminal to sign in and access the page.
Permissions
The edit-schema permission governs access. You can use permission plugins such as datasette-permissions-sql to grant additional access to the write interface.
These permission checks will call the permission_allowed() plugin hook with three arguments:
action will be the string ""edit-schema""
actor will be the currently authenticated actor - usually a dictionary
resource will be the string name of the database
Screenshot
Development
To set up this plugin locally, first checkout the code. Then create a new virtual environment:
cd datasette-edit-schema
python3 -mvenv venv
source venv/bin/activate
Or if you are using pipenv:
pipenv shell
Now install the dependencies and tests:
pip install -e '.[test]'
To run the tests:
pytest
",1,public,0,,0,
248385299,MDEwOlJlcG9zaXRvcnkyNDgzODUyOTk=,datasette-publish-fly,simonw/datasette-publish-fly,0,9599,https://github.com/simonw/datasette-publish-fly,Datasette plugin for publishing data using Fly,0,2020-03-19T01:47:01Z,2022-09-29T22:28:45Z,2022-09-29T17:25:15Z,,50,10,10,Python,1,1,1,1,0,3,0,0,4,apache-2.0,"[""datasette"", ""datasette-io"", ""datasette-plugin"", ""fly""]",3,4,10,main,"{""admin"": false, ""maintain"": false, ""push"": false, ""triage"": false, ""pull"": false}",,,3,3,"# datasette-publish-fly
[](https://pypi.org/project/datasette-publish-fly/)
[](https://github.com/simonw/datasette-publish-fly/releases)
[](https://github.com/simonw/datasette-publish-fly/actions?query=workflow%3ATest)
[](https://github.com/simonw/datasette-publish-fly/blob/main/LICENSE)
[Datasette](https://datasette.io/) plugin for deploying Datasette instances to [Fly.io](https://fly.io/).
Project background: [Using SQLite and Datasette with Fly Volumes](https://simonwillison.net/2022/Feb/15/fly-volumes/)
## Installation
Install this plugin in the same environment as Datasette.
$ datasette install datasette-publish-fly
## Deploying read-only data
First, install the `flyctl` command-line tool by [following their instructions](https://fly.io/docs/getting-started/installing-flyctl/).
Run `flyctl auth signup` to create an account there, or `flyctl auth login` if you already have one.
You can now use `datasette publish fly` to publish one or more SQLite database files:
datasette publish fly my-database.db --app=""my-data-app""
The argument you pass to `--app` will be used for the URL of your application: `my-data-app.fly.dev`.
To update an application, run the publish command passing the same application name to the `--app` option.
Fly have [a free tier](https://fly.io/docs/about/pricing/#free-allowances), beyond which they will charge you monthly for each application you have live. Details of their pricing can be [found on their site](https://fly.io/docs/pricing/).
Your application will be deployed at `https://your-app-name.fly.dev/` - be aware that it may take several minutes to start working the first time you deploy it.
## Using Fly volumes for writable databases
Fly [Volumes](https://fly.io/docs/reference/volumes/) provide persistent disk storage for Fly applications. Volumes can be 1GB or more in size and the Fly free tier includes 3GB of volume space.
Datasette plugins such as [datasette-uploads-csvs](https://datasette.io/plugins/datasette-upload-csvs) and [datasette-tiddlywiki](https://datasette.io/plugins/datasette-tiddlywiki) can be deployed to Fly and store their mutable data in a volume.
> :warning: **You should only run a single instance of your application** if your database accepts writes. Fly has excellent support for running multiple instances in different geographical regions, but `datasette-publish-fly` with volumes is not yet compatible with that model. You should probably [use Fly PostgreSQL instead](https://fly.io/blog/globally-distributed-postgres/).
Here's how to deploy `datasette-tiddlywiki` with authentication provided by `datasette-auth-passwords`.
First, you'll need to create a root password hash to use to sign into the instance.
You can do that by installing the plugin and running the `datasette hash-password` command, or by using [this hosted tool](https://datasette-auth-passwords-demo.datasette.io/-/password-tool).
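For example, a sketch of the command-line route:

    datasette install datasette-auth-passwords
    datasette hash-password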
The hash should look like `pbkdf2_sha256$...` - you'll need this for the next step.
In this example we're also deploying a read-only database called `content.db`.
Pick a name for your new application, then run the following:
datasette publish fly \
content.db \
--app your-application-name \
--create-volume 1 \
--create-db tiddlywiki \
--install datasette-auth-passwords \
--install datasette-tiddlywiki \
--plugin-secret datasette-auth-passwords root_password_hash 'pbkdf2_sha256$...'
This will create the new application, deploy the `content.db` read-only database, create a 1GB volume for that application, create a new database in that volume called `tiddlywiki.db`, then install the two plugins and configure the password you specified.
### Updating applications that use a volume
Once you have deployed an application using a volume, you can update that application without needing the `--create-volume` or `--create-db` options. To add the [datasette-graphql](https://datasette.io/plugins/datasette-graphql) plugin to your deployed application you would run the following:
datasette publish fly \
content.db \
--app your-application-name \
--install datasette-auth-passwords \
--install datasette-tiddlywiki \
--install datasette-graphql \
--plugin-secret datasette-auth-passwords root_password_hash 'pbkdf2_sha256$...'
Since the application name is the same you don't need the `--create-volume` or `--create-db` options - these are persisted automatically between deploys.
You do need to specify the full list of plugins that you want to have installed, and any plugin secrets.
You also need to include any read-only database files that are part of the instance - `content.db` in this example - otherwise the new deployment will not include them.
### Advanced volume usage
`datasette publish fly` will add a volume called `datasette` to your Fly application. You can customize the name using the `--volume-name custom_name` option.
Fly can be used to scale applications to run multiple instances in multiple regions around the world. This works well with read-only Datasette but is not currently recommended when using Datasette with volumes, since each Fly replica would need its own volume and data stored in one instance would not be visible in others.
If you want to use multiple instances with volumes you will need to switch to using the `flyctl` command directly. The `--generate-dir` option, described below, can help with this.
## Generating without deploying
Use the `--generate-dir` option to generate a directory that can be deployed to Fly rather than deploying directly:
datasette publish fly my-database.db \
--app=""my-generated-app"" \
--generate-dir /tmp/deploy-this
You can then manually deploy your generated application using the following:
cd /tmp/deploy-this
flyctl apps create my-generated-app
flyctl deploy
## datasette publish fly --help
```
Usage: datasette publish fly [OPTIONS] [FILES]...
Deploy an application to Fly that runs Datasette against the provided database
files.
Usage example:
datasette publish fly my-database.db --app=""my-data-app""
Full documentation: https://datasette.io/plugins/datasette-publish-fly
Options:
-m, --metadata FILENAME Path to JSON/YAML file containing metadata to
publish
--extra-options TEXT Extra options to pass to datasette serve
--branch TEXT Install datasette from a GitHub branch e.g.
main
--template-dir DIRECTORY Path to directory containing custom templates
--plugins-dir DIRECTORY Path to directory containing custom plugins
--static MOUNT:DIRECTORY Serve static files from this directory at
/MOUNT/...
--install TEXT Additional packages (e.g. plugins) to install
--plugin-secret ...
Secrets to pass to plugins, e.g. --plugin-
secret datasette-auth-github client_id xxx
--version-note TEXT Additional note to show on /-/versions
--secret TEXT Secret used for signing secure values, such as
signed cookies
--title TEXT Title for metadata
--license TEXT License label for metadata
--license_url TEXT License URL for metadata
--source TEXT Source label for metadata
--source_url TEXT Source URL for metadata
--about TEXT About label for metadata
--about_url TEXT About URL for metadata
--spatialite Enable SpatialLite extension
--region TEXT Fly region to deploy to, e.g sjc - see
https://fly.io/docs/reference/regions/
--create-volume INTEGER RANGE Create and attach volume of this size in GB
[x>=1]
--create-db TEXT Names of read-write database files to create
--volume-name TEXT Volume name to use
-a, --app TEXT Name of Fly app to deploy [required]
-o, --org TEXT Name of Fly org to deploy to
--generate-dir DIRECTORY Output generated application files and stop
without deploying
--show-files Output the generated Dockerfile, metadata.json
and fly.toml
--help Show this message and exit.
```
## Development
To contribute to this tool, first checkout the code. Then create a new virtual environment:
cd datasette-publish-fly
python -m venv venv
source venv/bin/activate
Or if you are using `pipenv`:
pipenv shell
Now install the dependencies and test dependencies:
pip install -e '.[test]'
To run the tests:
pytest
### Integration tests
The tests in `tests/test_integration.py` make actual calls to Fly to deploy a test application.
These tests are skipped by default. If you have `flyctl` installed and configured, you can run the integration tests like this:
pytest --integration -s
The `-s` option here ensures that output from the deploys will be visible to you - otherwise it can look like the tests have hung.
The tests will create applications on Fly that start with the prefix `publish-fly-temp-` and then delete them at the end of the run.
","
datasette-publish-fly
Datasette plugin for deploying Datasette instances to Fly.io.
The argument you pass to --app will be used for the URL of your application: my-data-app.fly.dev.
To update an application, run the publish command passing the same application name to the --app option.
Fly have a free tier, beyond which they will charge you monthly for each application you have live. Details of their pricing can be found on their site.
Your application will be deployed at https://your-app-name.fly.dev/ - be aware that it may take several minutes to start working the first time you deploy it.
Using Fly volumes for writable databases
Fly Volumes provide persistent disk storage for Fly applications. Volumes can be 1GB or more in size and the Fly free tier includes 3GB of volume space.
⚠️You should only run a single instance of your application if your database accepts writes. Fly has excellent support for running multiple instances in different geographical regions, but datasette-publish-fly with volumes is not yet compatible with that model. You should probably use Fly PostgreSQL instead.
Here's how to deploy datasette-tiddlywiki with authentication provided by datasette-auth-passwords.
First, you'll need to create a root password hash to use to sign into the instance.
You can do that by installing the plugin and running the datasette hash-password command, or by using this hosted tool.
The hash should look like pbkdf2_sha256$... - you'll need this for the next step.
In this example we're also deploying a read-only database called content.db.
Pick a name for your new application, then run the following:
This will create the new application, deploy the content.db read-only database, create a 1GB volume for that application, create a new database in that volume called tiddlywiki.db, then install the two plugins and configure the password you specified.
Updating applications that use a volume
Once you have deployed an application using a volume, you can update that application without needing the --create-volume or --create-db options. To add the datasette-graphql plugin to your deployed application you would run the following:
Since the application name is the same you don't need the --create-volume or --create-db options - these are persisted automatically between deploys.
You do need to specify the full list of plugins that you want to have installed, and any plugin secrets.
You also need to include any read-only database files that are part of the instance - content.db in this example - otherwise the new deployment will not include them.
Advanced volume usage
datasette publish fly will add a volume called datasette to your Fly application. You can customize the name using the --volume-name custom_name option.
Fly can be used to scale applications to run multiple instances in multiple regions around the world. This works well with read-only Datasette but is not currently recommended when using Datasette with volumes, since each Fly replica would need its own volume and data stored in one instance would not be visible in others.
If you want to use multiple instances with volumes you will need to switch to using the flyctl command directly. The --generate-dir option, described below, can help with this.
Generating without deploying
Use the --generate-dir option to generate a directory that can be deployed to Fly rather than deploying directly:
You can then manually deploy your generated application using the following:
cd /tmp/deploy-this
flyctl apps create my-generated-app
flyctl deploy
datasette publish fly --help
Usage: datasette publish fly [OPTIONS] [FILES]...
Deploy an application to Fly that runs Datasette against the provided database
files.
Usage example:
datasette publish fly my-database.db --app=""my-data-app""
Full documentation: https://datasette.io/plugins/datasette-publish-fly
Options:
-m, --metadata FILENAME Path to JSON/YAML file containing metadata to
publish
--extra-options TEXT Extra options to pass to datasette serve
--branch TEXT Install datasette from a GitHub branch e.g.
main
--template-dir DIRECTORY Path to directory containing custom templates
--plugins-dir DIRECTORY Path to directory containing custom plugins
--static MOUNT:DIRECTORY Serve static files from this directory at
/MOUNT/...
--install TEXT Additional packages (e.g. plugins) to install
--plugin-secret <TEXT TEXT TEXT>...
Secrets to pass to plugins, e.g. --plugin-
secret datasette-auth-github client_id xxx
--version-note TEXT Additional note to show on /-/versions
--secret TEXT Secret used for signing secure values, such as
signed cookies
--title TEXT Title for metadata
--license TEXT License label for metadata
--license_url TEXT License URL for metadata
--source TEXT Source label for metadata
--source_url TEXT Source URL for metadata
--about TEXT About label for metadata
--about_url TEXT About URL for metadata
--spatialite Enable SpatialLite extension
--region TEXT Fly region to deploy to, e.g sjc - see
https://fly.io/docs/reference/regions/
--create-volume INTEGER RANGE Create and attach volume of this size in GB
[x>=1]
--create-db TEXT Names of read-write database files to create
--volume-name TEXT Volume name to use
-a, --app TEXT Name of Fly app to deploy [required]
-o, --org TEXT Name of Fly org to deploy to
--generate-dir DIRECTORY Output generated application files and stop
without deploying
--show-files Output the generated Dockerfile, metadata.json
and fly.toml
--help Show this message and exit.
Development
To contribute to this tool, first checkout the code. Then create a new virtual environment:
cd datasette-publish-fly
python -m venv venv
source venv/bin/activate
Or if you are using pipenv:
pipenv shell
Now install the dependencies and test dependencies:
pip install -e '.[test]'
To run the tests:
pytest
Integration tests
The tests in tests/test_integration.py make actual calls to Fly to deploy a test application.
These tests are skipped by default. If you have flyctl installed and configured, you can run the integration tests like this:
pytest --integration -s
The -s option here ensures that output from the deploys will be visible to you - otherwise it can look like the tests have hung.
The tests will create applications on Fly that start with the prefix publish-fly-temp- and then delete them at the end of the run.
",1,public,0,,0,
248999994,MDEwOlJlcG9zaXRvcnkyNDg5OTk5OTQ=,datasette-show-errors,simonw/datasette-show-errors,0,9599,https://github.com/simonw/datasette-show-errors,Datasette plugin for displaying error tracebacks,0,2020-03-21T15:06:04Z,2020-09-24T00:17:29Z,2020-09-01T00:32:23Z,,7,1,1,Python,1,1,1,1,0,0,0,0,1,apache-2.0,"[""asgi"", ""datasette"", ""starlette"", ""datasette-plugin"", ""datasette-io""]",0,1,1,master,"{""admin"": false, ""push"": false, ""pull"": false}",,,0,0,"# datasette-show-errors
[](https://pypi.org/project/datasette-show-errors/)
[](https://circleci.com/gh/simonw/datasette-show-errors)
[](https://github.com/simonw/datasette-show-errors/blob/master/LICENSE)
Datasette plugin for displaying error tracebacks.
**This plugin does not work with current versions of Datasette.** See [issue #2](https://github.com/simonw/datasette-show-errors/issues/2).
## Installation
pip install datasette-show-errors
## Usage
Installing the plugin will cause any internal error to be displayed with a full traceback, rather than just a generic 500 page.
Be careful not to use this in a context that might expose sensitive information.
","
datasette-show-errors
Datasette plugin for displaying error tracebacks.
This plugin does not work with current versions of Datasette. See issue #2.
Installation
pip install datasette-show-errors
Usage
Installing the plugin will cause any internal error to be displayed with a full traceback, rather than just a generic 500 page.
Be careful not to use this in a context that might expose sensitive information.
",,,,,,
253632948,MDEwOlJlcG9zaXRvcnkyNTM2MzI5NDg=,datasette-publish-vercel,simonw/datasette-publish-vercel,0,9599,https://github.com/simonw/datasette-publish-vercel,Datasette plugin for publishing data using Vercel,0,2020-04-06T22:47:13Z,2022-07-29T17:09:47Z,2022-08-24T17:43:41Z,,55,27,27,Python,1,1,1,1,0,5,0,0,17,apache-2.0,"[""datasette"", ""datasette-io"", ""datasette-plugin"", ""vercel"", ""zeit-now""]",5,17,27,main,"{""admin"": false, ""maintain"": false, ""push"": false, ""triage"": false, ""pull"": false}",,,5,4,"# datasette-publish-vercel
[](https://pypi.org/project/datasette-publish-vercel/)
[](https://github.com/simonw/datasette-publish-vercel/releases)
[](https://github.com/simonw/datasette-publish-vercel/actions?query=workflow%3ATest)
[](https://github.com/simonw/datasette-publish-vercel/blob/main/LICENSE)
Datasette plugin for publishing data using [Vercel](https://vercel.com/).
## Installation
Install this plugin in the same environment as Datasette.
$ datasette install datasette-publish-vercel
## Usage
First, install the Vercel CLI tool by [following their instructions](https://vercel.com/download).
Run `vercel login` to login to (or create) an account.
Now you can use `datasette publish vercel` to publish your data:
datasette publish vercel my-database.db --project=my-database
The `--project` argument is required - it specifies the project name that should be used for your deployment. This will be used as part of the deployment's URL.
### Other options
* `--no-prod` deploys to the project without updating the ""production"" URL alias to point to that new deployment. Without that option all deploys go directly to production.
* `--debug` enables the Vercel CLI debug output.
* `--token` allows you to pass a Vercel authentication token, rather than needing to first run `vercel login` to configure the tool. Tokens can be created in the Vercel web dashboard under Account Settings -> Tokens.
* `--public` runs `vercel --public` to publish the application source code at `/_src` e.g. https://datasette-public.now.sh/_src and make recent logs visible at `/_logs` e.g. https://datasette-public.now.sh/_logs
* `--generate-dir` - by default this tool generates a new Vercel app in a temporary directory, deploys it and then deletes the directory. Use `--generate-dir=my-app` to output the generated application files to a new directory of your choice instead. You can then deploy it by running `vercel` in that directory.
* `--setting default_page_size 10` - use this to set Datasette settings, as described in [the documentation](https://docs.datasette.io/en/stable/settings.html). This is a replacement for the unsupported `--extra-options` option (see the combined example below).
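For example, a deploy combining several of these options might look like this (the database and project names are illustrative):

    datasette publish vercel my-database.db \
      --project=my-database \
      --setting default_page_size 10 \
      --no-prod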
### Full help
**Warning:** Some of these options are not yet implemented by this plugin. In particular, the following do not yet work:
* `--extra-options` - use `--setting` described above instead.
* `--plugin-secret`
* `--version-note`
```
$ datasette publish vercel --help
Usage: datasette publish vercel [OPTIONS] [FILES]...
Publish to https://vercel.com/
Options:
-m, --metadata FILENAME Path to JSON/YAML file containing metadata to publish
--extra-options TEXT Extra options to pass to datasette serve
--branch TEXT Install datasette from a GitHub branch e.g. main
--template-dir DIRECTORY Path to directory containing custom templates
--plugins-dir DIRECTORY Path to directory containing custom plugins
--static MOUNT:DIRECTORY Serve static files from this directory at /MOUNT/...
--install TEXT Additional packages (e.g. plugins) to install
--plugin-secret ...
Secrets to pass to plugins, e.g. --plugin-secret
datasette-auth-github client_id xxx
--version-note TEXT Additional note to show on /-/versions
--secret TEXT Secret used for signing secure values, such as signed
cookies
--title TEXT Title for metadata
--license TEXT License label for metadata
--license_url TEXT License URL for metadata
--source TEXT Source label for metadata
--source_url TEXT Source URL for metadata
--about TEXT About label for metadata
--about_url TEXT About URL for metadata
--token TEXT Auth token to use for deploy
--project PROJECT Vercel project name to use [required]
--scope TEXT Optional Vercel scope (e.g. a team name)
--no-prod Don't deploy directly to production
--debug Enable Vercel CLI debug output
--public Publish source with Vercel CLI --public
--generate-dir DIRECTORY Output generated application files and stop without
deploying
--generate-vercel-json Output generated vercel.json file and stop without
deploying
--vercel-json FILENAME Custom vercel.json file to use instead of generating
one
--setting SETTING... Setting, see docs.datasette.io/en/stable/settings.html
--crossdb Enable cross-database SQL queries
--help Show this message and exit.
```
## Using a custom `vercel.json` file
If you want to add additional redirects or similar to your Vercel configuration you may want to provide a custom `vercel.json` file.
To do this, first generate a configuration file (without running a deploy) using the `--generate-vercel-json` option:
datasette publish vercel my-database.db \
--project=my-database \
--generate-vercel-json > vercel.json
You can now edit the `vercel.json` file that this creates to add your custom options.
Then run the deploy using:
datasette publish vercel my-database.db \
--project=my-database \
--vercel-json=vercel.json
## Setting a `DATASETTE_SECRET`
Datasette uses [a secret string](https://docs.datasette.io/en/stable/settings.html#configuring-the-secret) for purposes such as signing authentication cookies. This secret is reset when the server restarts, which will sign out any users who are authenticated using a signed cookie.
You can avoid this by generating a `DATASETTE_SECRET` secret string and setting that as a [Vercel environment variable](https://vercel.com/docs/concepts/projects/environment-variables). If you do this the secret will stay consistent and your users will not be signed out.
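One way to generate a suitable secret string is with a Python one-liner, for example:

    python3 -c 'import secrets; print(secrets.token_hex(32))'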
## Using this with GitHub Actions
This plugin can be used together with [GitHub Actions](https://github.com/features/actions) to deploy Datasette instances automatically on new pushes to a repo, or on a schedule.
The GitHub Actions runners already have the Vercel deployment tool installed. You'll need to create an API token for your account at [vercel.com/account/tokens](https://vercel.com/account/tokens), and store that as a secret in your GitHub repository called `VERCEL_TOKEN`.
Make sure your workflow has installed `datasette` and `datasette-publish-vercel` using `pip`, then add the following step to your GitHub Actions workflow:
```
- name: Deploy Datasette using Vercel
env:
VERCEL_TOKEN: ${{ secrets.VERCEL_TOKEN }}
run: |-
datasette publish vercel mydb.db \
--token $VERCEL_TOKEN \
--project my-vercel-project
```
You can see a full example of a workflow that uses Vercel in this way [in the simonw/til repository](https://github.com/simonw/til/blob/12b3f0d3679320cbeafa5df164bbc08ba703625d/.github/workflows/build.yml).
","
datasette-publish-vercel
Datasette plugin for publishing data using Vercel.
Installation
Install this plugin in the same environment as Datasette.
The --project argument is required - it specifies the project name that should be used for your deployment. This will be used as part of the deployment's URL.
Other options
--no-prod deploys to the project without updating the ""production"" URL alias to point to that new deployment. Without that option all deploys go directly to production.
--debug enables the Vercel CLI debug output.
--token allows you to pass a Vercel authentication token, rather than needing to first run vercel login to configure the tool. Tokens can be created in the Vercel web dashboard under Account Settings -> Tokens.
--generate-dir - by default this tool generates a new Vercel app in a temporary directory, deploys it and then deletes the directory. Use --generate-dir=my-app to output the generated application files to a new directory of your choice instead. You can then deploy it by running vercel in that directory.
--setting default_page_size 10 - use this to set Datasette settings, as described in the documentation. This is a replacement for the unsupported --extra-options option.
Full help
Warning: Some of these options are not yet implemented by this plugin. In particular, the following do not yet work:
--extra-options - use --setting described above instead.
--plugin-secret
--version-note
$ datasette publish vercel --help
Usage: datasette publish vercel [OPTIONS] [FILES]...
Publish to https://vercel.com/
Options:
-m, --metadata FILENAME Path to JSON/YAML file containing metadata to publish
--extra-options TEXT Extra options to pass to datasette serve
--branch TEXT Install datasette from a GitHub branch e.g. main
--template-dir DIRECTORY Path to directory containing custom templates
--plugins-dir DIRECTORY Path to directory containing custom plugins
--static MOUNT:DIRECTORY Serve static files from this directory at /MOUNT/...
--install TEXT Additional packages (e.g. plugins) to install
--plugin-secret <TEXT TEXT TEXT>...
Secrets to pass to plugins, e.g. --plugin-secret
datasette-auth-github client_id xxx
--version-note TEXT Additional note to show on /-/versions
--secret TEXT Secret used for signing secure values, such as signed
cookies
--title TEXT Title for metadata
--license TEXT License label for metadata
--license_url TEXT License URL for metadata
--source TEXT Source label for metadata
--source_url TEXT Source URL for metadata
--about TEXT About label for metadata
--about_url TEXT About URL for metadata
--token TEXT Auth token to use for deploy
--project PROJECT Vercel project name to use [required]
--scope TEXT Optional Vercel scope (e.g. a team name)
--no-prod Don't deploy directly to production
--debug Enable Vercel CLI debug output
--public Publish source with Vercel CLI --public
--generate-dir DIRECTORY Output generated application files and stop without
deploying
--generate-vercel-json Output generated vercel.json file and stop without
deploying
--vercel-json FILENAME Custom vercel.json file to use instead of generating
one
--setting SETTING... Setting, see docs.datasette.io/en/stable/settings.html
--crossdb Enable cross-database SQL queries
--help Show this message and exit.
Using a custom vercel.json file
If you want to add additional redirects or similar to your Vercel configuration you may want to provide a custom vercel.json file.
To do this, first generate a configuration file (without running a deploy) using the --generate-vercel-json option:
Datasette uses a secret string for purposes such as signing authentication cookies. This secret is reset when the server restarts, which will sign out any users who are authenticated using a signed cookie.
You can avoid this by generating a DATASETTE_SECRET secret string and setting that as a Vercel environment variable. If you do this the secret will stay consistent and your users will not be signed out.
Using this with GitHub Actions
This plugin can be used together with GitHub Actions to deploy Datasette instances automatically on new pushes to a repo, or on a schedule.
The GitHub Actions runners already have the Vercel deployment tool installed. You'll need to create an API token for your account at vercel.com/account/tokens, and store that as a secret in your GitHub repository called VERCEL_TOKEN.
Make sure your workflow has installed datasette and datasette-publish-vercel using pip, then add the following step to your GitHub Actions workflow:
",1,public,0,,0,
255460347,MDEwOlJlcG9zaXRvcnkyNTU0NjAzNDc=,datasette-clone,simonw/datasette-clone,0,9599,https://github.com/simonw/datasette-clone,Create a local copy of database files from a Datasette instance,0,2020-04-13T23:05:41Z,2021-06-08T15:33:21Z,2021-02-22T19:32:36Z,,20,2,2,Python,1,1,1,1,0,0,0,0,0,apache-2.0,"[""datasette"", ""datasette-io"", ""datasette-tool""]",0,0,2,main,"{""admin"": false, ""push"": false, ""pull"": false}",,,0,1,"# datasette-clone
[](https://pypi.org/project/datasette-clone/)
[](https://github.com/simonw/datasette-clone/releases)
[](https://github.com/simonw/datasette-clone/actions?query=workflow%3ATest)
[](https://github.com/simonw/datasette-clone/blob/main/LICENSE)
Create a local copy of database files from a Datasette instance.
See [datasette-clone](https://simonwillison.net/2020/Apr/14/datasette-clone/) on my blog for background on this project.
## How to install
$ pip install datasette-clone
## Usage
This only works against Datasette instances running immutable databases (with the `-i` option). Databases published using the `datasette publish` command should be compatible with this tool.
To download copies of all `.db` files from an instance, run:
datasette-clone https://latest.datasette.io
You can provide an optional second argument to specify a directory:
datasette-clone https://latest.datasette.io /tmp/here-please
The command stores its own copy of a `databases.json` manifest and uses it on subsequent runs to download only the databases that have changed.
It also stores a copy of the instance's `metadata.json` to ensure you have a copy of any source and licensing information for the downloaded databases.
If your instance is protected by an API token, you can use `--token` to provide it:
datasette-clone https://latest.datasette.io --token=xyz
For verbose output showing what the tool is doing, use `-v`.
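For example:

    datasette-clone https://latest.datasette.io -v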
","
datasette-clone
Create a local copy of database files from a Datasette instance.
See datasette-clone on my blog for background on this project.
How to install
$ pip install datasette-clone
Usage
This only works against Datasette instances running immutable databases (with the -i option). Databases published using the datasette publish command should be compatible with this tool.
To download copies of all .db files from an instance, run:
datasette-clone https://latest.datasette.io
You can provide an optional second argument to specify a directory:
The command stores its own copy of a databases.json manifest and uses it to only download databases that have changed the next time you run the command.
It also stores a copy of the instance's metadata.json to ensure you have a copy of any source and licensing information for the downloaded databases.
If your instance is protected by an API token, you can use --token to provide it:
For verbose output showing what the tool is doing, use -v.
",,,,,,
261634807,MDEwOlJlcG9zaXRvcnkyNjE2MzQ4MDc=,datasette-media,simonw/datasette-media,0,9599,https://github.com/simonw/datasette-media,Datasette plugin for serving media based on a SQL query,0,2020-05-06T02:42:57Z,2021-05-03T05:04:39Z,2020-07-30T23:39:29Z,,70,11,11,Python,1,1,1,1,0,0,0,0,8,apache-2.0,"[""datasette"", ""datasette-plugin"", ""datasette-io""]",0,8,11,main,"{""admin"": false, ""push"": false, ""pull"": false}",,,0,1,"# datasette-media
[](https://pypi.org/project/datasette-media/)
[](https://github.com/simonw/datasette-media/releases)
[](https://circleci.com/gh/simonw/datasette-media)
[](https://github.com/simonw/datasette-media/blob/master/LICENSE)
Datasette plugin for serving media based on a SQL query.
Use this when you have a database table containing references to files on disk - or binary content stored in BLOB columns - that you would like to be able to serve to your users.
## Installation
Install this plugin in the same environment as Datasette.
$ pip install datasette-media
### HEIC image support
Modern iPhones save their photos using the [HEIC image format](https://en.wikipedia.org/wiki/High_Efficiency_Image_File_Format). Processing these images requires an additional dependency, [pyheif](https://pypi.org/project/pyheif/). You can include this dependency by running:
$ pip install datasette-media[heif]
## Usage
You can use this plugin to configure Datasette to serve static media based on SQL queries to an underlying database table.
Media will be served from URLs that start with `/-/media/`. The full URL to each media asset will look like this:
/-/media/type-of-media/media-key
`type-of-media` will correspond to a configured SQL query, and might be something like `photo`. `media-key` will be an identifier that is used as part of the underlying SQL query to find which file should be served.
### Serving static files from disk
The following ``metadata.json`` configuration will cause this plugin to serve files from disk, based on queries to a database table called `apple_photos`.
```json
{
""plugins"": {
""datasette-media"": {
""photo"": {
""sql"": ""select filepath from apple_photos where uuid=:key""
}
}
}
}
```
A request to `/-/media/photo/CF972D33-5324-44F2-8DAE-22CB3182CD31` will execute the following SQL query:
```sql
select filepath from apple_photos where uuid=:key
```
The value from the URL - in this case `CF972D33-5324-44F2-8DAE-22CB3182CD31` - will be passed as the `:key` parameter to the query.
The query returns a `filepath` value that has been read from the table. The plugin will then read that file from disk and serve it in response to the request.
SQL queries default to running against the first connected database. You can specify a different database to execute the query against using `""database"": ""name_of_db""`. To execute against `photos.db`, use this:
```json
{
""plugins"": {
""datasette-media"": {
""photo"": {
""sql"": ""select filepath from apple_photos where uuid=:key"",
""database"": ""photos""
}
}
}
}
```
See [dogsheep-photos](https://github.com/dogsheep/dogsheep-photos) for an example of an application that can benefit from this plugin.
### Serving binary content from BLOB columns
If your SQL query returns a `content` column, this will be served directly to the user:
```json
{
""plugins"": {
""datasette-media"": {
""photo"": {
""sql"": ""select thumbnail as content from photos where uuid=:key"",
""database"": ""thumbs""
}
}
}
}
```
You can also return a `content_type` column which will be used as the `Content-Type` header served to the user:
```json
{
""plugins"": {
""datasette-media"": {
""photo"": {
""sql"": ""select body as content, 'text/html;charset=utf-8' as content_type from documents where id=:key"",
""database"": ""documents""
}
}
}
}
```
If you do not specify a `content_type` the default of `application/octet-stream` will be used.
### Serving content proxied from a URL
To serve content that is itself fetched from elsewhere, return a `content_url` column. This can be particularly useful when combined with the ability to resize images (described in the next section).
```json
{
""plugins"": {
""datasette-media"": {
""photos"": {
""sql"": ""select photo_url as content_url from photos where id=:key"",
""database"": ""photos"",
""enable_transform"": true
}
}
}
}
```
Now you can access resized versions of images from that URL like so:
/-/media/photos/13?w=200
### Setting a download file name
The `content_filename` column can be returned to force browsers to download the content using a specific file name.
```json
{
""plugins"": {
""datasette-media"": {
""hello"": {
""sql"": ""select 'Hello ' || :key as content, 'hello.txt' as content_filename""
}
}
}
}
```
Visiting `/-/media/hello/Groot` will cause your browser to download a file called `hello.txt` containing the text `Hello Groot`.
### Resizing or transforming images
Your SQL query can specify that an image should be resized and/or converted to another format by returning additional columns. All three are optional.
* `resize_width` - the width to resize the image to
* `resize_height` - the height to resize the image to
* `output_format` - the output format to use (e.g. `jpeg` or `png`) - any output format [supported by Pillow](https://pillow.readthedocs.io/en/stable/handbook/image-file-formats.html) is allowed here.
If you specify one but not the other of `resize_width` or `resize_height` the unspecified one will be calculated automatically to maintain the aspect ratio of the image.
Here's an example configuration that will resize all images to be JPEGs that are 200 pixels in height:
```json
{
""plugins"": {
""datasette-media"": {
""photo"": {
""sql"": ""select filepath, 200 as resize_height, 'jpeg' as output_format from apple_photos where uuid=:key"",
""database"": ""photos""
}
}
}
}
```
If you enable the `enable_transform` configuration option you can instead specify transform parameters at runtime using querystring parameters. For example:
- `/-/media/photo/CF972D33?w=200` to resize to a fixed width
- `/-/media/photo/CF972D33?h=200` to resize to a fixed height
- `/-/media/photo/CF972D33?format=jpeg` to convert to JPEG
That option is added like so:
```json
{
""plugins"": {
""datasette-media"": {
""photo"": {
""sql"": ""select filepath from apple_photos where uuid=:key"",
""database"": ""photos"",
""enable_transform"": true
}
}
}
}
```
The maximum allowed height or width is 4000 pixels. You can change this limit using the `""max_width_height""` option:
```json
{
""plugins"": {
""datasette-media"": {
""photo"": {
""sql"": ""select filepath from apple_photos where uuid=:key"",
""database"": ""photos"",
""enable_transform"": true,
""max_width_height"": 1000
}
}
}
}
```
## Configuration
In addition to the different named content types, the following special plugin configuration setting is available:
- `transform_threads` - number of threads to use for running transformations (e.g. resizing). Defaults to 4.
This can be used like this:
```json
{
""plugins"": {
""datasette-media"": {
""photo"": {
""sql"": ""select filepath from apple_photos where uuid=:key"",
""database"": ""photos""
},
""transform_threads"": 8
}
}
}
```
","
datasette-media
Datasette plugin for serving media based on a SQL query.
Use this when you have a database table containing references to files on disk - or binary content stored in BLOB columns - that you would like to be able to serve to your users.
Installation
Install this plugin in the same environment as Datasette.
$ pip install datasette-media
HEIC image support
Modern iPhones save their photos using the HEIC image format. Processing these images requires an additional dependency, pyheif. You can include this dependency by running:
$ pip install datasette-media[heif]
Usage
You can use this plugin to configure Datasette to serve static media based on SQL queries to an underlying database table.
Media will be served from URLs that start with /-/media/. The full URL to each media asset will look like this:
/-/media/type-of-media/media-key
type-of-media will correspond to a configured SQL query, and might be something like photo. media-key will be an identifier that is used as part of the underlying SQL query to find which file should be served.
Serving static files from disk
The following metadata.json configuration will cause this plugin to serve files from disk, based on queries to a database table called apple_photos.
{
""plugins"": {
""datasette-media"": {
""photo"": {
""sql"": ""select filepath from apple_photos where uuid=:key""
}
}
}
}
A request to /-/media/photo/CF972D33-5324-44F2-8DAE-22CB3182CD31 will execute the following SQL query:
select filepath from apple_photos where uuid=:key
The value from the URL - in this case CF972D33-5324-44F2-8DAE-22CB3182CD31 - will be passed as the :key parameter to the query.
The query returns a filepath value that has been read from the table. The plugin will then read that file from disk and serve it in response to the request.
SQL queries default to running against the first connected database. You can specify a different database to execute the query against using ""database"": ""name_of_db"". To execute against photos.db, use this:
{
""plugins"": {
""datasette-media"": {
""photo"": {
""sql"": ""select filepath from apple_photos where uuid=:key"",
""database"": ""photos""
}
}
}
}
See dogsheep-photos for an example of an application that can benefit from this plugin.
Serving binary content from BLOB columns
If your SQL query returns a content column, this will be served directly to the user:
{
""plugins"": {
""datasette-media"": {
""photo"": {
""sql"": ""select thumbnail as content from photos where uuid=:key"",
""database"": ""thumbs""
}
}
}
}
You can also return a content_type column which will be used as the Content-Type header served to the user:
{
""plugins"": {
""datasette-media"": {
""photo"": {
""sql"": ""select body as content, 'text/html;charset=utf-8' as content_type from documents where id=:key"",
""database"": ""documents""
}
}
}
}
If you do not specify a content_type the default of application/octet-stream will be used.
Serving content proxied from a URL
To serve content that is itself fetched from elsewhere, return a content_url column. This can be particularly useful when combined with the ability to resize images (described in the next section).
{
""plugins"": {
""datasette-media"": {
""photos"": {
""sql"": ""select photo_url as content_url from photos where id=:key"",
""database"": ""photos"",
""enable_transform"": true
}
}
}
}
Now you can access resized versions of images from that URL like so:
/-/media/photos/13?w=200
Setting a download file name
The content_filename column can be returned to force browsers to download the content using a specific file name.
Visiting /-/media/hello/Groot will cause your browser to download a file called hello.txt containing the text Hello Groot.
Resizing or transforming images
Your SQL query can specify that an image should be resized and/or converted to another format by returning additional columns. All three are optional.
resize_width - the width to resize the image to
resize_height - the height to resize the image to
output_format - the output format to use (e.g. jpeg or png) - any output format supported by Pillow is allowed here.
If you specify one but not the other of resize_width or resize_height the unspecified one will be calculated automatically to maintain the aspect ratio of the image.
Here's an example configuration that will resize all images to be JPEGs that are 200 pixels in height:
{
""plugins"": {
""datasette-media"": {
""photo"": {
""sql"": ""select filepath, 200 as resize_height, 'jpeg' as output_format from apple_photos where uuid=:key"",
""database"": ""photos""
}
}
}
}
If you enable the enable_transform configuration option you can instead specify transform parameters at runtime using querystring parameters. For example:
/-/media/photo/CF972D33?w=200 to resize to a fixed width
/-/media/photo/CF972D33?h=200 to resize to a fixed height
/-/media/photo/CF972D33?format=jpeg to convert to JPEG
",,,,,,
271408895,MDEwOlJlcG9zaXRvcnkyNzE0MDg4OTU=,datasette-permissions-sql,simonw/datasette-permissions-sql,0,9599,https://github.com/simonw/datasette-permissions-sql,Datasette plugin for configuring permission checks using SQL queries,0,2020-06-10T23:48:13Z,2020-06-12T07:06:12Z,2020-06-12T07:06:15Z,,25,0,0,Python,1,1,1,1,0,0,0,0,0,apache-2.0,"[""datasette"", ""datasette-plugin"", ""datasette-io""]",0,0,0,master,"{""admin"": false, ""push"": false, ""pull"": false}",,,0,1,"# datasette-permissions-sql
[](https://pypi.org/project/datasette-permissions-sql/)
[](https://circleci.com/gh/simonw/datasette-permissions-sql)
[](https://github.com/simonw/datasette-permissions-sql/blob/master/LICENSE)
Datasette plugin for configuring permission checks using SQL queries
## Installation
Install this plugin in the same environment as Datasette.
$ pip install datasette-permissions-sql
## Usage
First, read up on how Datasette's [authentication and permissions system](https://datasette.readthedocs.io/en/latest/authentication.html) works.
This plugin lets you define rules containing SQL queries that are executed to see if the currently authenticated actor has permission to perform certain actions.
Consider a canned query which authenticated users should only be able to execute if a row in the `users` table says that they are a member of staff.
That `users` table in the `mydatabase.db` database could look like this:
| id | username | is_staff |
|--|--------|--------|
| 1 | cleopaws | 0 |
| 2 | simon | 1 |
Authenticated users have an `actor` that looks like this:
```json
{
""id"": 2,
""username"": ""simon""
}
```
To configure the canned query to only be executable by staff users, add the following to your `metadata.json`:
```json
{
""plugins"": {
""datasette-permissions-sql"": [
{
""action"": ""view-query"",
""resource"": [""mydatabase"", ""promote_to_staff""],
""sql"": ""SELECT * FROM users WHERE is_staff = 1 AND id = :actor_id""
}
]
},
""databases"": {
""mydatabase"": {
""queries"": {
""promote_to_staff"": {
""sql"": ""UPDATE users SET is is_staff=1 WHERE id=:id"",
""write"": true
}
}
}
}
}
```
The `""datasette-permissions-sql""` key is a list of rules. Each of those rules has the following shape:
```json
{
""action"": ""name-of-action"",
""resource"": [""resource identifier to run this on""],
""sql"": ""SQL query to execute"",
""database"": ""mydatabase""
}
```
Both `""action""` and `""resource""` are optional. If present, the SQL query will only be executed on permission checks that match the action and, if present, the resource indicators.
`""database""` is also optional: it specifies the named database that the query should be executed against. If it is not present the first connected database will be used.
The Datasette documentation includes a [list of built-in permissions](https://datasette.readthedocs.io/en/stable/authentication.html#built-in-permissions) that you might want to use here.
### The SQL query
If the SQL query returns any rows the action will be allowed. If it returns no rows, the plugin hook will return `False` and deny access to that action.
The SQL query is called with a number of named parameters. You can use any of these as part of the query.
The list of parameters is as follows:
* `action` - the action, e.g. `""view-database""`
* `resource_1` - the first component of the resource, if one was passed
* `resource_2` - the second component of the resource, if available
* `actor_*` - a parameter for every key on the actor. Usually `actor_id` is present.
If any rows are returned, the permission check passes. If no rows are returned the check fails.
Another example table, this time granting explicit access to individual tables. Consider a table called `table_access` that looks like this:
| user_id | database | table |
| - | - | - |
| 1 | mydb | dogs |
| 2 | mydb | dogs |
| 1 | mydb | cats |
The following SQL query would grant access to the `dogs` table in the `mydb.db` database to users 1 and 2 - but would forbid access for user 2 to the `cats` table:
```sql
SELECT
*
FROM
table_access
WHERE
user_id = :actor_id
AND ""database"" = :resource_1
AND ""table"" = :resource_2
```
In a `metadata.yaml` configuration file that would look like this:
```yaml
databases:
mydb:
allow_sql: {}
plugins:
datasette-permissions-sql:
- action: view-table
sql: |-
SELECT
*
FROM
table_access
WHERE
user_id = :actor_id
AND ""database"" = :resource_1
AND ""table"" = :resource_2
```
We're using `allow_sql: {}` here to disable arbitrary SQL queries. This prevents users from running `select * from cats` directly to work around the permissions limits.
### Fallback mode
The default behaviour of this plugin is to take full control of specified permissions. The SQL query will directly control if the user is allowed or denied access to the permission.
This means that the default policy for each permission (which in Datasette core is ""allow"" for `view-database` and friends) will be ignored. It also means that any other `permission_allowed` plugins will not get their turn once this plugin has executed.
You can change this on a per-rule basis using ``""fallback"": true``:
```json
{
""action"": ""view-table"",
""resource"": [""mydatabase"", ""mytable""],
""sql"": ""select * from admins where user_id = :actor_id"",
""fallback"": true
}
```
When running in fallback mode, a query result returning no rows will cause the plugin hook to return ``None`` - which means ""I have no opinion on this permission, fall back to other plugins or the default"".
In this mode you can still return `False` (for ""deny access"") by returning a single row with a single value of `-1`. For example:
```json
{
""action"": ""view-table"",
""resource"": [""mydatabase"", ""mytable""],
""sql"": ""select -1 from banned where user_id = :actor_id"",
""fallback"": true
}
```
","
datasette-permissions-sql
Datasette plugin for configuring permission checks using SQL queries
Installation
Install this plugin in the same environment as Datasette.
This plugin lets you define rules containing SQL queries that are executed to see if the currently authenticated actor has permission to perform certain actions.
Consider a canned query which authenticated users should only be able to execute if a row in the users table says that they are a member of staff.
That users table in the mydatabase.db database could look like this:
id | username | is_staff
1  | cleopaws | 0
2  | simon    | 1
Authenticated users have an actor that looks like this:
{
""id"": 2,
""username"": ""simon""
}
To configure the canned query to only be executable by staff users, add the following to your metadata.json:
{
""plugins"": {
""datasette-permissions-sql"": [
{
""action"": ""view-query"",
""resource"": [""mydatabase"", ""promote_to_staff""],
""sql"": ""SELECT * FROM users WHERE is_staff = 1 AND id = :actor_id""
}
]
},
""databases"": {
""mydatabase"": {
""queries"": {
""promote_to_staff"": {
""sql"": ""UPDATE users SET is is_staff=1 WHERE id=:id"",
""write"": true
}
}
}
}
}
The ""datasette-permissions-sql"" key is a list of rules. Each of those rules has the following shape:
{
""action"": ""name-of-action"",
""resource"": [""resource identifier to run this on""],
""sql"": ""SQL query to execute"",
""database"": ""mydatabase""
}
Both ""action"" and ""resource"" are optional. If present, the SQL query will only be executed on permission checks that match the action and, if present, the resource indicators.
""database"" is also optional: it specifies the named database that the query should be executed against. If it is not present the first connected database will be used.
If the SQL query returns any rows the action will be allowed. If it returns no rows, the plugin hook will return False and deny access to that action.
The SQL query is called with a number of named parameters. You can use any of these as part of the query.
The list of parameters is as follows:
action - the action, e.g. ""view-database""
resource_1 - the first component of the resource, if one was passed
resource_2 - the second component of the resource, if available
actor_* - a parameter for every key on the actor. Usually actor_id is present.
If any rows are returned, the permission check passes. If no rows are returned the check fails.
Another example table, this time granting explicit access to individual tables. Consider a table called table_access that looks like this:
user_id | database | table
1       | mydb     | dogs
2       | mydb     | dogs
1       | mydb     | cats
The following SQL query would grant access to the dogs table in the mydb.db database to users 1 and 2 - but would forbid access for user 2 to the cats table:
SELECT
  *
FROM
  table_access
WHERE
  user_id = :actor_id
  AND ""database"" = :resource_1
  AND ""table"" = :resource_2
In a metadata.yaml configuration file that would look like this:
databases:
  mydb:
    allow_sql: {}
plugins:
  datasette-permissions-sql:
  - action: view-table
    sql: |-
      SELECT
        *
      FROM
        table_access
      WHERE
        user_id = :actor_id
        AND ""database"" = :resource_1
        AND ""table"" = :resource_2
We're using allow_sql: {} here to disable arbitrary SQL queries. This prevents users from running select * from cats directly to work around the permissions limits.
Fallback mode
The default behaviour of this plugin is to take full control of specified permissions. The SQL query will directly control if the user is allowed or denied access to the permission.
This means that the default policy for each permission (which in Datasette core is ""allow"" for view-database and friends) will be ignored. It also means that any other permission_allowed plugins will not get their turn once this plugin has executed.
You can change this on a per-rule basis using ""fallback"": true:
{
""action"": ""view-table"",
""resource"": [""mydatabase"", ""mytable""],
""sql"": ""select * from admins where user_id = :actor_id"",
""fallback"": true
}
When running in fallback mode, a query result returning no rows will cause the plugin hook to return None - which means ""I have no opinion on this permission, fall back to other plugins or the default"".
In this mode you can still return False (for ""deny access"") by returning a single row with a single value of -1. For example:
{
""action"": ""view-table"",
""resource"": [""mydatabase"", ""mytable""],
""sql"": ""select -1 from banned where user_id = :actor_id"",
""fallback"": true
}
",,,,,,
271665336,MDEwOlJlcG9zaXRvcnkyNzE2NjUzMzY=,datasette-auth-tokens,simonw/datasette-auth-tokens,0,9599,https://github.com/simonw/datasette-auth-tokens,Datasette plugin for authenticating access using API tokens,0,2020-06-11T23:23:30Z,2021-10-15T00:52:53Z,2021-10-15T00:54:20Z,,34,4,4,Python,1,1,1,1,0,1,0,0,0,apache-2.0,"[""datasette"", ""datasette-io"", ""datasette-plugin""]",1,0,4,main,"{""admin"": false, ""maintain"": false, ""push"": false, ""triage"": false, ""pull"": false}",,,1,3,"# datasette-auth-tokens
[](https://pypi.org/project/datasette-auth-tokens/)
[](https://github.com/simonw/datasette-auth-tokens/releases)
[](https://github.com/simonw/datasette-auth-tokens/actions?query=workflow%3ATest)
[](https://github.com/simonw/datasette-auth-tokens/blob/main/LICENSE)
Datasette plugin for authenticating access using API tokens
## Installation
Install this plugin in the same environment as Datasette.
$ pip install datasette-auth-tokens
## Hard-coded tokens
Read about Datasette's [authentication and permissions system](https://datasette.readthedocs.io/en/latest/authentication.html).
This plugin lets you configure secret API tokens which can be used to make authenticated requests to Datasette.
First, create a random API token. A useful recipe for doing that is the following:
$ python -c 'import secrets; print(secrets.token_hex(32))'
5f9a486dd807de632200b17508c75002bb66ca6fde1993db1de6cbd446362589
Decide on the actor that this token should represent, for example:
```json
{
""bot_id"": ""my-bot""
}
```
You can then use `""allow""` blocks to provide that token with permission to access specific actions. To enable access to a configured writable SQL query you could use this in your `metadata.json`:
```json
{
""plugins"": {
""datasette-auth-tokens"": {
""tokens"": [
{
""token"": {
""$env"": ""BOT_TOKEN""
},
""actor"": {
""bot_id"": ""my-bot""
}
}
]
}
},
""databases"": {
"":memory:"": {
""queries"": {
""show_version"": {
""sql"": ""select sqlite_version()"",
""allow"": {
""bot_id"": ""my-bot""
}
}
}
}
}
}
```
This uses Datasette's [secret configuration values mechanism](https://datasette.readthedocs.io/en/stable/plugins.html#secret-configuration-values) to allow the secret token to be passed as an environment variable.
Run Datasette like this:
BOT_TOKEN=""this-is-the-secret-token"" \
datasette -m metadata.json
You can now run authenticated API queries like this:
$ curl -H 'Authorization: Bearer this-is-the-secret-token' \
'http://127.0.0.1:8001/:memory:/show_version.json?_shape=array'
[{""sqlite_version()"": ""3.31.1""}]
Additionally you can allow passing the token as a query string parameter, although that's disabled by default given the security implications of URLs with secret tokens included. This may be useful to easily allow embedding data between different services.
Simply enable it using the `param` config value:
```json
{
""plugins"": {
""datasette-auth-tokens"": {
""tokens"": [
{
""token"": {
""$env"": ""BOT_TOKEN""
},
""actor"": {
""bot_id"": ""my-bot""
}
}
],
""param"": ""_auth_token""
}
},
""databases"": {
"":memory:"": {
""queries"": {
""show_version"": {
""sql"": ""select sqlite_version()"",
""allow"": {
""bot_id"": ""my-bot""
}
}
}
}
}
}
```
You can now run authenticated API queries like this:
$ curl http://127.0.0.1:8001/:memory:/show_version.json?_shape=array&_auth_token=this-is-the-secret-token
[{""sqlite_version()"": ""3.31.1""}]
## Tokens from your database
As an alternative (or in addition) to the hard-coded list of tokens you can store tokens in a database table and configure the plugin to access them using a SQL query.
Your query needs to take a `:token_id` parameter and return at least two columns: one called `token_secret` and one called `actor_*` - usually `actor_id`. Further `actor_` prefixed columns can be returned to provide more details for the authenticated actor.
Here's a simple example of a configuration query:
```sql
select actor_id, actor_name, token_secret from tokens where token_id = :token_id
```
This can run against a table like this one:
| token_id | token_secret | actor_id | actor_name |
| -------- | ------------ | -------- | ---------- |
| 1 | bd3c94f51fcd | 78 | Cleopaws |
| 2 | 86681b4d6f66 | 32 | Pancakes |
The tokens are formed as the token ID, then a hyphen, then the token secret. For example:
- `1-bd3c94f51fcd`
- `2-86681b4d6f66`
The SQL query will be executed with the portion before the hyphen as the `:token_id` parameter.
The `token_secret` value returned by the query will be compared to the portion of the token after the hyphen to check if the token is valid.
Columns with a prefix of `actor_` will be used to populate the actor dictionary. In the above example, a token of `2-86681b4d6f66` will become an actor dictionary of `{""id"": 32, ""name"": ""Pancakes""}`.
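To illustrate that scheme, here is a sketch of how a token could be split and checked; the `lookup_secret_and_actor` helper is hypothetical and stands in for the configured SQL query, assumed here to return a dictionary of columns (or `None` for an unknown token ID):
```python
import secrets

def check_token(token, lookup_secret_and_actor):
    # Tokens look like '<token_id>-<token_secret>', e.g. '2-86681b4d6f66'
    token_id, _, token_secret = token.partition('-')
    row = lookup_secret_and_actor(token_id)  # hypothetical: runs the configured SQL query
    if row is None:
        return None
    # Compare the secret portion against the token_secret column
    if not secrets.compare_digest(row['token_secret'], token_secret):
        return None
    # Columns prefixed actor_ become the actor dictionary, e.g. {'id': 32, 'name': 'Pancakes'}
    return {k[len('actor_'):]: v for k, v in row.items() if k.startswith('actor_')}
```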
To configure this, use a `""query""` block in your plugin configuration like this:
```json
{
""plugins"": {
""datasette-auth-tokens"": {
""query"": {
""sql"": ""select actor_id, actor_name, token_secret from tokens where token_id = :token_id"",
""database"": ""tokens""
}
}
},
""databases"": {
""tokens"": {
""allow"": {}
}
}
}
```
The `""sql""` key here contains the SQL query. The `""database""` key has the name of the attached database file that the query should be executed against - in this case it would execute against `tokens.db`.
### Securing your tokens
Anyone with access to your Datasette instance can use it to read the `token_secret` column in your tokens table. This probably isn't what you want!
To avoid this, you should lock down access to that table. The configuration example above shows how to do this using an `""allow"": {}` block. Consult Datasette's [Permissions documentation](https://datasette.readthedocs.io/en/stable/authentication.html#permissions) for more information about how to lock down this kind of access.
","
datasette-auth-tokens
Datasette plugin for authenticating access using API tokens
Installation
Install this plugin in the same environment as Datasette.
Decide on the actor that this token should represent, for example:
{
""bot_id"": ""my-bot""
}
You can then use ""allow"" blocks to provide that token with permission to access specific actions. To enable access to a configured writable SQL query you could use this in your metadata.json:
Additionally you can allow passing the token as a query string parameter, although that's disabled by default given the security implications of URLs with secret tokens included. This may be useful to easily allow embedding data between different services.
As an alternative (or in addition) to the hard-coded list of tokens you can store tokens in a database table and configure the plugin to access them using a SQL query.
Your query needs to take a :token_id parameter and return at least two columns: one called token_secret and one called actor_* - usually actor_id. Further actor_ prefixed columns can be returned to provide more details for the authenticated actor.
Here's a simple example of a configuration query:
select actor_id, actor_name, token_secret from tokens where token_id = :token_id
This can run against a table like this one:
token_id | token_secret | actor_id | actor_name
1        | bd3c94f51fcd | 78       | Cleopaws
2        | 86681b4d6f66 | 32       | Pancakes
The tokens are formed as the token ID, then a hyphen, then the token secret. For example:
1-bd3c94f51fcd
2-86681b4d6f66
The SQL query will be executed with the portion before the hyphen as the :token_id parameter.
The token_secret value returned by the query will be compared to the portion of the token after the hyphen to check if the token is valid.
Columns with a prefix of actor_ will be used to populate the actor dictionary. In the above example, a token of 2-86681b4d6f66 will become an actor dictionary of {""id"": 32, ""name"": ""Pancakes""}.
To configure this, use a ""query"" block in your plugin configuration like this:
The ""sql"" key here contains the SQL query. The ""database"" key has the name of the attached database file that the query should be executed against - in this case it would execute against tokens.db.
Securing your tokens
Anyone with access to your Datasette instance can use it to read the token_secret column in your tokens table. This probably isn't what you want!
To avoid this, you should lock down access to that table. The configuration example above shows how to do this using an ""allow"": {} block. Consult Datasette's Permissions documentation for more information about how to lock down this kind of access.
",1,public,0,,,
272098486,MDEwOlJlcG9zaXRvcnkyNzIwOTg0ODY=,datasette-psutil,simonw/datasette-psutil,0,9599,https://github.com/simonw/datasette-psutil,Datasette plugin adding a /-/psutil debugging endpoint,0,2020-06-13T22:57:07Z,2022-03-07T15:36:30Z,2022-03-07T15:35:57Z,https://datasette.io/plugins/datasette-psutil,12,2,2,Python,1,1,1,1,0,0,0,0,1,apache-2.0,"[""datasette"", ""datasette-io"", ""datasette-plugin"", ""psutil""]",0,1,2,main,"{""admin"": false, ""maintain"": false, ""push"": false, ""triage"": false, ""pull"": false}",,,0,2,"# datasette-psutil
[](https://pypi.org/project/datasette-psutil/)
[](https://github.com/simonw/datasette-psutil/releases)
[](https://github.com/simonw/datasette-psutil/actions?query=workflow%3ATest)
[](https://github.com/simonw/datasette-psutil/blob/main/LICENSE)
Datasette plugin adding a `/-/psutil` debugging endpoint
## Installation
Install this plugin in the same environment as Datasette.
$ pip install datasette-psutil
## Usage
Visit `/-/psutil` on your Datasette instance to see various information provided by [psutil](https://psutil.readthedocs.io/).
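The page surfaces the kind of process and system details that psutil can report, for example (a sketch of the underlying library, not the plugin's own code):
```python
import psutil

print(psutil.cpu_percent(interval=1))  # CPU utilisation percentage
print(psutil.virtual_memory())         # memory usage statistics
print(psutil.Process().open_files())   # files opened by the current process
```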
## Demo
https://latest-with-plugins.datasette.io/-/psutil is a live demo of this plugin, hosted on Google Cloud Run.
","
datasette-psutil
Datasette plugin adding a /-/psutil debugging endpoint
Installation
Install this plugin in the same environment as Datasette.
$ pip install datasette-psutil
Usage
Visit /-/psutil on your Datasette instance to see various information provided by psutil.
",1,public,0,,,
273609879,MDEwOlJlcG9zaXRvcnkyNzM2MDk4Nzk=,datasette-saved-queries,simonw/datasette-saved-queries,0,9599,https://github.com/simonw/datasette-saved-queries,Datasette plugin that lets users save and execute queries,0,2020-06-20T00:20:42Z,2020-09-24T05:08:37Z,2020-08-15T23:38:46Z,,12,2,2,Python,1,1,1,1,0,0,0,0,1,,"[""datasette"", ""datasette-plugin"", ""datasette-io""]",0,1,2,main,"{""admin"": false, ""push"": false, ""pull"": false}",,,0,1,"# datasette-saved-queries
[](https://pypi.org/project/datasette-saved-queries/)
[](https://github.com/simonw/datasette-saved-queries/releases)
[](https://github.com/simonw/datasette-saved-queries/blob/master/LICENSE)
Datasette plugin that lets users save and execute queries
## Installation
Install this plugin in the same environment as Datasette.
$ pip install datasette-saved-queries
## Usage
When the plugin is installed Datasette will automatically create a `saved_queries` table in the first connected database when it starts up.
It also creates a `save_query` writable canned query which you can use to save new queries.
Queries that you save will be added to the query list on the database page.
## Development
To set up this plugin locally, first checkout the code. Then create a new virtual environment:
cd datasette-saved-queries
python -mvenv venv
source venv/bin/activate
Or if you are using `pipenv`:
pipenv shell
Now install the dependencies and tests:
pip install -e '.[test]'
To run the tests:
pytest
","
datasette-saved-queries
Datasette plugin that lets users save and execute queries
Installation
Install this plugin in the same environment as Datasette.
$ pip install datasette-saved-queries
Usage
When the plugin is installed Datasette will automatically create a saved_queries table in the first connected database when it starts up.
It also creates a save_query writable canned query which you can use to save new queries.
Queries that you save will be added to the query list on the database page.
Development
To set up this plugin locally, first checkout the code. Then create a new virtual environment:
cd datasette-saved-queries
python -mvenv venv
source venv/bin/activate
Or if you are using pipenv:
pipenv shell
Now install the dependencies and tests:
pip install -e '.[test]'
To run the tests:
pytest
",,,,,,
274264484,MDEwOlJlcG9zaXRvcnkyNzQyNjQ0ODQ=,sqlite-generate,simonw/sqlite-generate,0,9599,https://github.com/simonw/sqlite-generate,Tool for generating demo SQLite databases,0,2020-06-22T23:36:44Z,2021-02-27T15:25:26Z,2021-02-27T15:25:24Z,https://sqlite-generate-demo.datasette.io/,56,17,17,Python,1,1,1,1,0,0,0,0,0,apache-2.0,"[""sqlite"", ""datasette-io"", ""datasette-tool""]",0,0,17,main,"{""admin"": false, ""push"": false, ""pull"": false}",,,0,2,"# sqlite-generate
[](https://pypi.org/project/sqlite-generate/)
[](https://github.com/simonw/sqlite-generate/releases)
[](https://github.com/simonw/sqlite-generate/blob/master/LICENSE)
Tool for generating demo SQLite databases
## Installation
Install this plugin using `pip`:
$ pip install sqlite-generate
## Demo
You can see a demo of the database generated using this command running in [Datasette](https://github.com/simonw/datasette) at https://sqlite-generate-demo.datasette.io/
The demo is generated using the following command:
sqlite-generate demo.db --seed seed --fts --columns=10 --fks=0,3 --pks=0,2
## Usage
To generate a SQLite database file called `data.db` with 10 randomly named tables in it, run the following:
sqlite-generate data.db
You can use the `--tables` option to generate a different number of tables:
sqlite-generate data.db --tables 20
You can run the command against the same database file multiple times to keep adding new tables, using different settings for each batch of generated tables.
By default each table will contain a random number of rows between 0 and 200. You can customize this with the `--rows` option:
sqlite-generate data.db --rows 20
This will insert 20 rows into each table.
sqlite-generate data.db --rows 500,2000
This inserts a random number of rows between 500 and 2000 into each table.
Each table will have 5 columns. You can change this using `--columns`:
sqlite-generate data.db --columns 10
`--columns` can also accept a range:
sqlite-generate data.db --columns 5,15
You can control the random number seed used with the `--seed` option. This will result in the exact same database file being created by multiple runs of the tool:
sqlite-generate data.db --seed=myseed
By default each table will contain between 0 and 2 foreign key columns to other tables. You can control this using the `--fks` option, with either a single number or a range:
sqlite-generate data.db --columns=20 --fks=5,15
Each table will have a single primary key column called `id`. You can use the `--pks=` option to change the number of primary key columns on each table. Drop it to 0 to generate [rowid tables](https://www.sqlite.org/rowidtable.html). Increase it above 1 to generate tables with compound primary keys. Or use a range to get a random selection of different primary key layouts:
sqlite-generate data.db --pks=0,2
To configure [SQLite full-text search](https://www.sqlite.org/fts5.html) for all columns of type text, use `--fts`:
sqlite-generate data.db --fts
This will use FTS5 by default. To use [FTS4](https://www.sqlite.org/fts3.html) instead, use `--fts4`.
## Development
To contribute to this tool, first checkout the code. Then create a new virtual environment:
cd sqlite-generate
python -mvenv venv
source venv/bin/activate
Or if you are using `pipenv`:
pipenv shell
Now install the dependencies and tests:
pip install -e '.[test]'
To run the tests:
pytest
","
To generate a SQLite database file called data.db with 10 randomly named tables in it, run the following:
sqlite-generate data.db
You can use the --tables option to generate a different number of tables:
sqlite-generate data.db --tables 20
You can run the command against the same database file multiple times to keep adding new tables, using different settings for each batch of generated tables.
By default each table will contain a random number of rows between 0 and 200. You can customize this with the --rows option:
sqlite-generate data.db --rows 20
This will insert 20 rows into each table.
sqlite-generate data.db --rows 500,2000
This inserts a random number of rows between 500 and 2000 into each table.
Each table will have 5 columns. You can change this using --columns:
sqlite-generate data.db --columns 10
--columns can also accept a range:
sqlite-generate data.db --columns 5,15
You can control the random number seed used with the --seed option. This will result in the exact same database file being created by multiple runs of the tool:
sqlite-generate data.db --seed=myseed
By default each table will contain between 0 and 2 foreign key columns to other tables. You can control this using the --fks option, with either a single number or a range:
sqlite-generate data.db --columns=20 --fks=5,15
Each table will have a single primary key column called id. You can use the --pks= option to change the number of primary key columns on each table. Drop it to 0 to generate rowid tables. Increase it above 1 to generate tables with compound primary keys. Or use a range to get a random selection of different primary key layouts:
This will use FTS5 by default. To use FTS4 instead, use --fts4.
Development
To contribute to this tool, first checkout the code. Then create a new virtual environment:
cd sqlite-generate
python -mvenv venv
source venv/bin/activate
Or if you are using pipenv:
pipenv shell
Now install the dependencies and tests:
pip install -e '.[test]'
To run the tests:
pytest
",,,,,,
274293597,MDEwOlJlcG9zaXRvcnkyNzQyOTM1OTc=,datasette-block-robots,simonw/datasette-block-robots,0,9599,https://github.com/simonw/datasette-block-robots,Datasette plugin that blocks robots and crawlers using robots.txt,0,2020-06-23T02:52:23Z,2022-08-30T16:13:40Z,2022-08-30T16:25:38Z,https://datasette.io/plugins/datasette-block-robots,21,2,2,Python,1,1,1,1,0,0,0,0,0,,"[""datasette"", ""datasette-io"", ""datasette-plugin"", ""robots-txt""]",0,0,2,main,"{""admin"": false, ""maintain"": false, ""push"": false, ""triage"": false, ""pull"": false}",,,0,2,"# datasette-block-robots
[](https://pypi.org/project/datasette-block-robots/)
[](https://github.com/simonw/datasette-block-robots/releases)
[](https://github.com/simonw/datasette-block-robots/blob/master/LICENSE)
Datasette plugin that blocks robots and crawlers using robots.txt
## Installation
Install this plugin in the same environment as Datasette.
$ pip install datasette-block-robots
## Usage
Having installed the plugin, `/robots.txt` on your Datasette instance will return the following:
User-agent: *
Disallow: /
This will request all robots and crawlers not to visit any of the pages on your site.
Here's a demo of the plugin in action: https://sqlite-generate-demo.datasette.io/robots.txt
## Configuration
By default the plugin will block all access to the site, using `Disallow: /`.
If you want the index page to be indexed by search engines without crawling the database, table or row pages themselves, you can use the following:
```json
{
""plugins"": {
""datasette-block-robots"": {
""allow_only_index"": true
}
}
}
```
This will return a `/robots.txt` like so:
User-agent: *
Disallow: /db1
Disallow: /db2
With a `Disallow` line for every attached database.
To block access to specific areas of the site using custom paths, add this to your `metadata.json` configuration file:
```json
{
""plugins"": {
""datasette-block-robots"": {
""disallow"": [""/mydatabase/mytable""]
}
}
}
```
This will result in a `/robots.txt` that looks like this:
User-agent: *
Disallow: /mydatabase/mytable
Alternatively you can set the full contents of the `robots.txt` file using the `literal` configuration option. Here's how to do that if you are using YAML rather than JSON and have a `metadata.yml` file:
```yaml
plugins:
datasette-block-robots:
literal: |-
User-agent: *
Disallow: /
User-agent: Bingbot
User-agent: Googlebot
Disallow:
```
This example would block all crawlers with the exception of Googlebot and Bingbot, which are allowed to crawl the entire site.
## Extending this with other plugins
This plugin adds a new [plugin hook](https://docs.datasette.io/en/stable/plugin_hooks.html) to Datasette called `block_robots_extra_lines()` which can be used by other plugins to add their own additional lines to the `robots.txt` file.
The hook can optionally accept these parameters:
- `datasette`: The current [Datasette instance](https://docs.datasette.io/en/stable/internals.html#datasette-class). You can use this to execute SQL queries or read plugin configuration settings.
- `request`: The [Request object](https://docs.datasette.io/en/stable/internals.html#request-object) representing the incoming request to `/robots.txt`.
The hook should return a list of strings, each representing a line to be added to the `robots.txt` file.
It can also return an `async def` function, which will be awaited and used to generate a list of lines. Use this option if you need to make `await` calls inside your hook implementation.
This example uses the hook to add a `Sitemap: http://example.com/sitemap.xml` line to the `robots.txt` file:
```python
from datasette import hookimpl
@hookimpl
def block_robots_extra_lines(datasette, request):
return [
""Sitemap: {}"".format(datasette.absolute_url(request, ""/sitemap.xml"")),
]
```
This example blocks access to paths based on a database query:
```python
@hookimpl
def block_robots_extra_lines(datasette):
async def inner():
db = datasette.get_database()
result = await db.execute(""select path from mytable"")
return [
""Disallow: /{}"".format(row[""path""]) for row in result
]
return inner
```
[datasette-sitemap](https://datasette.io/plugins/datasette-sitemap) is an example of a plugin that uses this hook.
## Development
To set up this plugin locally, first checkout the code. Then create a new virtual environment:
cd datasette-block-robots
python3 -mvenv venv
source venv/bin/activate
Or if you are using `pipenv`:
pipenv shell
Now install the dependencies and tests:
pip install -e '.[test]'
To run the tests:
pytest
","
datasette-block-robots
Datasette plugin that blocks robots and crawlers using robots.txt
Installation
Install this plugin in the same environment as Datasette.
$ pip install datasette-block-robots
Usage
Having installed the plugin, /robots.txt on your Datasette instance will return the following:
User-agent: *
Disallow: /
This will request all robots and crawlers not to visit any of the pages on your site.
This will result in a /robots.txt that looks like this:
User-agent: *
Disallow: /mydatabase/mytable
Alternatively you can set the full contents of the robots.txt file using the literal configuration option. Here's how to do that if you are using YAML rather than JSON and have a metadata.yml file:
This example would block all crawlers with the exception of Googlebot and Bingbot, which are allowed to crawl the entire site.
Extending this with other plugins
This plugin adds a new plugin hook to Datasette called block_robots_extra_lines() which can be used by other plugins to add their own additional lines to the robots.txt file.
The hook can optionally accept these parameters:
datasette: The current Datasette instance. You can use this to execute SQL queries or read plugin configuration settings.
request: The Request object representing the incoming request to /robots.txt.
The hook should return a list of strings, each representing a line to be added to the robots.txt file.
It can also return an async def function, which will be awaited and used to generate a list of lines. Use this option if you need to make await calls inside your hook implementation.
This example uses the hook to add a Sitemap: http://example.com/sitemap.xml line to the robots.txt file:
To set up this plugin locally, first checkout the code. Then create a new virtual environment:
cd datasette-block-robots
python3 -mvenv venv
source venv/bin/activate
Or if you are using pipenv:
pipenv shell
Now install the dependencies and tests:
pip install -e '.[test]'
To run the tests:
pytest
",1,public,0,,0,
275615947,MDEwOlJlcG9zaXRvcnkyNzU2MTU5NDc=,datasette-glitch,simonw/datasette-glitch,0,9599,https://github.com/simonw/datasette-glitch,Utilities to help run Datasette on Glitch,0,2020-06-28T15:41:25Z,2020-07-01T22:48:35Z,2020-07-01T22:49:22Z,,3,1,1,Python,1,1,1,1,0,0,0,0,0,,"[""glitch"", ""datasette"", ""datasette-plugin"", ""datasette-io""]",0,0,1,main,"{""admin"": false, ""push"": false, ""pull"": false}",,,0,1,"# datasette-glitch
[](https://pypi.org/project/datasette-glitch/)
[](https://github.com/simonw/datasette-glitch/releases)
[](https://github.com/simonw/datasette-glitch/blob/master/LICENSE)
Utilities to help run Datasette on Glitch
## Installation
Install this plugin in the same environment as Datasette.
$ pip install datasette-glitch
## Usage
This plugin outputs a special link which will sign you into Datasette as the root user.
Click Tools -> Logs in the Glitch editor interface after your app starts to see the link.
","
datasette-glitch
Utilities to help run Datasette on Glitch
Installation
Install this plugin in the same environment as Datasette.
$ pip install datasette-glitch
Usage
This plugin outputs a special link which will sign you into Datasette as the root user.
Click Tools -> Logs in the Glitch editor interface after your app starts to see the link.
",,,,,,
275624346,MDEwOlJlcG9zaXRvcnkyNzU2MjQzNDY=,datasette-init,simonw/datasette-init,0,9599,https://github.com/simonw/datasette-init,Ensure specific tables and views exist on startup,0,2020-06-28T16:26:29Z,2021-06-14T19:43:55Z,2020-07-01T22:47:09Z,,9,1,1,Python,1,1,1,1,0,0,0,0,0,,[],0,0,1,main,"{""admin"": false, ""push"": false, ""pull"": false}",,,0,1,"# datasette-init
[](https://pypi.org/project/datasette-init/)
[](https://github.com/simonw/datasette-init/releases)
[](https://github.com/simonw/datasette-init/blob/master/LICENSE)
Ensure specific tables and views exist on startup
## Installation
Install this plugin in the same environment as Datasette.
$ pip install datasette-init
## Usage
This plugin is configured using `metadata.json` (or `metadata.yaml`).
### Creating tables
Add a block like this that specifies the tables you would like to ensure exist:
```json
{
""plugins"": {
""datasette-init"": {
""my_database"": {
""tables"": {
""dogs"": {
""columns"": {
""id"": ""integer"",
""name"": ""text"",
""age"": ""integer"",
""weight"": ""float""
},
""pk"": ""id""
}
}
}
}
}
}
```
Any tables that do not yet exist will be created when Datasette first starts.
Valid column types are `""integer""`, `""text""`, `""float""` and `""blob""`.
The `""pk""` is optional, and is used to define the primary key. To define a compound primary key (across more than one column) use a list of column names here:
```json
""pk"": [""id1"", ""id2""]
```
### Creating views
The plugin can also be used to create views:
```json
{
""plugins"": {
""datasette-init"": {
""my_database"": {
""views"": {
""my_view"": ""select 1 + 1""
}
}
}
}
}
```
Each view in the ``""views""`` block will be created when the Database first starts. If a view with the same name already exists it will be replaced with the new definition.
## Development
To set up this plugin locally, first checkout the code. Then create a new virtual environment:
cd datasette-init
python3 -mvenv venv
source venv/bin/activate
Or if you are using `pipenv`:
pipenv shell
Now install the dependencies and tests:
pip install -e '.[test]'
To run the tests:
pytest
","
datasette-init
Ensure specific tables and views exist on startup
Installation
Install this plugin in the same environment as Datasette.
$ pip install datasette-init
Usage
This plugin is configured using metadata.json (or metadata.yaml).
Creating tables
Add a block like this that specifies the tables you would like to ensure exist:
Any tables that do not yet exist will be created when Datasette first starts.
Valid column types are ""integer"", ""text"", ""float"" and ""blob"".
The ""pk"" is optional, and is used to define the primary key. To define a compound primary key (across more than one column) use a list of column names here:
Each view in the ""views"" block will be created when the Database first starts. If a view with the same name already exists it will be replaced with the new definition.
Development
To set up this plugin locally, first checkout the code. Then create a new virtual environment:
cd datasette-init
python3 -mvenv venv
source venv/bin/activate
Or if you are using pipenv:
pipenv shell
Now install the dependencies and tests:
pip install -e '.[test]'
To run the tests:
pytest
",,,,,,
275711254,MDEwOlJlcG9zaXRvcnkyNzU3MTEyNTQ=,datasette-write,simonw/datasette-write,0,9599,https://github.com/simonw/datasette-write,Datasette plugin providing a UI for executing SQL writes against the database,0,2020-06-29T02:27:31Z,2021-09-11T06:00:31Z,2021-09-11T06:03:07Z,https://datasette.io/plugins/datasette-write,15,3,3,Python,1,1,1,1,0,2,0,0,2,,"[""datasette-io"", ""datasette-plugin""]",2,2,3,main,"{""admin"": false, ""maintain"": false, ""push"": false, ""triage"": false, ""pull"": false}",,,2,2,"# datasette-write
[](https://pypi.org/project/datasette-write/)
[](https://github.com/simonw/datasette-write/releases)
[](https://github.com/simonw/datasette-write/blob/master/LICENSE)
Datasette plugin providing a UI for writing to a database
## Installation
Install this plugin in the same environment as Datasette.
$ pip install datasette-write
## Usage
Having installed the plugin, visit `/-/write` on your Datasette instance to submit SQL queries that will be executed against a write connection to the specified database.
By default only the `root` user can access the page - so you'll need to run Datasette with the `--root` option and click on the link shown in the terminal to sign in and access the page.
The `datasette-write` permission governs access. You can use permission plugins such as [datasette-permissions-sql](https://github.com/simonw/datasette-permissions-sql) to grant additional access to the write interface.
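For example, a `datasette-permissions-sql` rule for the `datasette-write` action could be written out like this (a sketch only; the `admins` table and its query are illustrative):
```python
import json

# Illustrative sketch: grant the datasette-write permission to users listed
# in a hypothetical admins table, via a datasette-permissions-sql rule.
metadata = {
    'plugins': {
        'datasette-permissions-sql': [
            {
                'action': 'datasette-write',
                'sql': 'select * from admins where user_id = :actor_id',
            }
        ]
    }
}

with open('metadata.json', 'w') as fp:
    json.dump(metadata, fp, indent=4)
```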
## Development
To set up this plugin locally, first checkout the code. Then create a new virtual environment:
cd datasette-write
python3 -mvenv venv
source venv/bin/activate
Or if you are using `pipenv`:
pipenv shell
Now install the dependencies and tests:
pip install -e '.[test]'
To run the tests:
pytest
","
datasette-write
Datasette plugin providing a UI for writing to a database
Installation
Install this plugin in the same environment as Datasette.
$ pip install datasette-write
Usage
Having installed the plugin, visit /-/write on your Datasette instance to submit SQL queries that will be executed against a write connection to the specified database.
By default only the root user can access the page - so you'll need to run Datasette with the --root option and click on the link shown in the terminal to sign in and access the page.
The datasette-write permission governs access. You can use permission plugins such as datasette-permissions-sql to grant additional access to the write interface.
Development
To set up this plugin locally, first checkout the code. Then create a new virtual environment:
cd datasette-write
python3 -mvenv venv
source venv/bin/activate
Or if you are using pipenv:
pipenv shell
Now install the dependencies and tests:
pip install -e '.[test]'
To run the tests:
pytest
",,,,,,
279357123,MDEwOlJlcG9zaXRvcnkyNzkzNTcxMjM=,datasette-auth-passwords,simonw/datasette-auth-passwords,0,9599,https://github.com/simonw/datasette-auth-passwords,Datasette plugin for authentication using passwords,0,2020-07-13T16:34:39Z,2022-02-10T22:07:52Z,2022-03-22T01:49:50Z,https://datasette-auth-passwords-demo.datasette.io,52,12,12,Python,1,1,1,1,0,0,0,0,3,,"[""datasette"", ""datasette-io"", ""datasette-plugin""]",0,3,12,main,"{""admin"": false, ""maintain"": false, ""push"": false, ""triage"": false, ""pull"": false}",,,0,1,"# datasette-auth-passwords
[](https://pypi.org/project/datasette-auth-passwords/)
[](https://github.com/simonw/datasette-auth-passwords/releases)
[](https://github.com/simonw/datasette-auth-passwords/blob/master/LICENSE)
Datasette plugin for authenticating access using passwords
## Installation
Install this plugin in the same environment as Datasette.
$ datasette install datasette-auth-passwords
## Demo
A demo of this plugin is running at https://datasette-auth-passwords-demo.datasette.io/
The demo is configured to show the `public.db` database to everyone, but the `private.db` database only to logged in users.
You can log in at https://datasette-auth-passwords-demo.datasette.io/-/login with username `root` and password `password!`.
## Usage
This plugin works based on a list of username/password accounts that are hard-coded into the plugin configuration.
First, you'll need to create a password hash. There are three ways to do that:
- Install the plugin, then use the interactive tool located at `/-/password-tool`
- Use the hosted version of that tool at https://datasette-auth-passwords-demo.datasette.io/-/password-tool
- Use the `datasette hash-password` command, described below
Now add the following to your `metadata.json`:
```json
{
""plugins"": {
""datasette-auth-passwords"": {
""someusername_password_hash"": {
""$env"": ""PASSWORD_HASH_1""
}
}
}
}
```
The password hash can now be specified in an environment variable when you run Datasette. You can do that like so:
PASSWORD_HASH_1='pbkdf2_sha256$...' \
datasette -m metadata.json
Be sure to use single quotes here otherwise the `$` symbols in the password hash may be incorrectly interpreted by your shell.
You will now be able to log in to your instance using the form at `/-/login` with `someusername` as the username and the password that you used to create your hash as the password.
You can include as many accounts as you like in the configuration, each with different usernames.
### datasette hash-password
The plugin exposes a new CLI command, `datasette hash-password`. You can run this without arguments to interactively create a new password hash:
```
% datasette hash-password
Password:
Repeat for confirmation:
pbkdf2_sha256$260000$1513...
```
Or if you want to use it as part of a script, you can add the `--no-confirm` option to generate a hash directly from a value passed to standard input:
```
% echo 'my password' | datasette hash-password --no-confirm
pbkdf2_sha256$260000$daa...
```
### Specifying actors
By default, a logged in user will result in an [actor block](https://datasette.readthedocs.io/en/stable/authentication.html#actors) that just contains their username:
```json
{
""id"": ""someusername""
}
```
You can customize the actor that will be used for a username by including an `""actors""` configuration block, like this:
```json
{
""plugins"": {
""datasette-auth-passwords"": {
""someusername_password_hash"": {
""$env"": ""PASSWORD_HASH_1""
},
""actors"": {
""someusername"": {
""id"": ""someusername"",
""name"": ""Some user""
}
}
}
}
}
```
### HTTP Basic authentication option
This plugin defaults to implementing login using an HTML form that sets a signed authentication cookie.
You can alternatively configure it to use [HTTP Basic authentication](https://developer.mozilla.org/en-US/docs/Web/HTTP/Authentication#basic_authentication_scheme) instead.
Do this by adding `""http_basic_auth"": true` to the `datasette-auth-passwords` block in your plugin configuration.
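For example, this sketch adds that flag to an existing `metadata.json` (assuming a configuration file like the one shown earlier already exists):
```python
import json

# Sketch: enable HTTP Basic authentication mode by setting the
# http_basic_auth flag in the plugin configuration block.
with open('metadata.json') as fp:
    metadata = json.load(fp)

metadata.setdefault('plugins', {}).setdefault(
    'datasette-auth-passwords', {}
)['http_basic_auth'] = True

with open('metadata.json', 'w') as fp:
    json.dump(metadata, fp, indent=4)
```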
This option introduces the following behaviour:
- Account usernames and passwords are configured in the same way as form-based authentication
- Every page within Datasette - even pages that normally do not use authentication, such as static assets - will display a browser login prompt
- Users will be unable to log out without closing their browser entirely
There is a demo of this mode at https://datasette-auth-passwords-http-basic-demo.datasette.io/ - sign in with username `root` and password `password!`
### Using with datasette publish
If you are publishing data using a [datasette publish](https://datasette.readthedocs.io/en/stable/publish.html#datasette-publish) command you can use the `--plugin-secret` option to securely configure your password hashes (see [secret configuration values](https://datasette.readthedocs.io/en/stable/plugins.html#secret-configuration-values)).
You would run the command something like this:
datasette publish cloudrun mydatabase.db \
--install datasette-auth-passwords \
--plugin-secret datasette-auth-passwords root_password_hash 'pbkdf2_sha256$...' \
--service datasette-auth-passwords-demo
This will allow you to log in as username `root` using the password that you used to create the hash.
## Development
To set up this plugin locally, first checkout the code. Then create a new virtual environment:
cd datasette-auth-passwords
python3 -mvenv venv
source venv/bin/activate
Or if you are using `pipenv`:
pipenv shell
Now install the dependencies and tests:
pip install -e '.[test]'
To run the tests:
pytest
","
datasette-auth-passwords
Datasette plugin for authenticating access using passwords
Installation
Install this plugin in the same environment as Datasette.
Be sure to use single quotes here otherwise the $ symbols in the password hash may be incorrectly interpreted by your shell.
You will now be able to log in to your instance using the form at /-/login with someusername as the username and the password that you used to create your hash as the password.
You can include as many accounts as you like in the configuration, each with different usernames.
datasette hash-password
The plugin exposes a new CLI command, datasette hash-password. You can run this without arguments to interactively create a new password hash:
% datasette hash-password
Password:
Repeat for confirmation:
pbkdf2_sha256$260000$1513...
Or if you want to use it as part of a script, you can add the --no-confirm option to generate a hash directly from a value passed to standard input:
If you are publishing data using a datasette publish command you can use the --plugin-secret option to securely configure your password hashes (see secret configuration values).
This will allow you to log in as username root using the password that you used to create the hash.
Development
To set up this plugin locally, first checkout the code. Then create a new virtual environment:
cd datasette-auth-passwords
python3 -mvenv venv
source venv/bin/activate
Or if you are using pipenv:
pipenv shell
Now install the dependencies and tests:
pip install -e '.[test]'
To run the tests:
pytest
",1,public,0,,,
280500027,MDEwOlJlcG9zaXRvcnkyODA1MDAwMjc=,datasette-insert,simonw/datasette-insert,0,9599,https://github.com/simonw/datasette-insert,Datasette plugin for inserting and updating data,0,2020-07-17T18:40:34Z,2022-06-27T02:54:14Z,2022-07-22T17:52:23Z,,54,9,9,Python,1,1,1,1,0,0,0,0,1,,"[""datasette"", ""datasette-io"", ""datasette-plugin""]",0,1,9,main,"{""admin"": false, ""maintain"": false, ""push"": false, ""triage"": false, ""pull"": false}",,,0,2,"# datasette-insert
[](https://pypi.org/project/datasette-insert/)
[](https://github.com/simonw/datasette-insert/releases)
[](https://github.com/simonw/datasette-insert/blob/master/LICENSE)
Datasette plugin for inserting and updating data
## Installation
Install this plugin in the same environment as Datasette.
$ pip install datasette-insert
This plugin should always be deployed with additional configuration to prevent unauthenticated access; see the notes below.
If you are trying it out on your own local machine, you can `pip install` the [datasette-insert-unsafe](https://github.com/simonw/datasette-insert-unsafe) plugin to allow access without needing to set up authentication or permissions separately.
## Inserting data and creating tables
Start datasette and make sure it has a writable SQLite database attached to it. If you have not yet created a database file you can use this:
datasette data.db --create
The `--create` option will create a new empty `data.db` database file if it does not already exist.
The plugin adds an endpoint that allows data to be inserted or updated and tables to be created by POSTing JSON data to the following URL:
/-/insert/name-of-database/name-of-table
The JSON should look like this:
```json
[
{
""id"": 1,
""name"": ""Cleopaws"",
""age"": 5
},
{
""id"": 2,
""name"": ""Pancakes"",
""age"": 5
}
]
```
The first time data is posted to the URL a table of that name will be created if it does not already exist, with the desired columns.
You can specify which column should be used as the primary key using the `?pk=` URL argument.
Here's how to POST to a database and create a new table using the Python `requests` library:
```python
import requests
requests.post(""http://localhost:8001/-/insert/data/dogs?pk=id"", json=[
{
""id"": 1,
""name"": ""Cleopaws"",
""age"": 5
},
{
""id"": 2,
""name"": ""Pancakes"",
""age"": 4
}
])
```
And here's how to do the same thing using `curl`:
```
curl --request POST \
--data '[
{
""id"": 1,
""name"": ""Cleopaws"",
""age"": 5
},
{
""id"": 2,
""name"": ""Pancakes"",
""age"": 4
}
]' \
'http://localhost:8001/-/insert/data/dogs?pk=id'
```
Or by piping in JSON like so:
cat dogs.json | curl --request POST -d @- \
'http://localhost:8001/-/insert/data/dogs?pk=id'
### Inserting a single row
If you are inserting a single row you can optionally send it as a dictionary rather than a list with a single item:
```
curl --request POST \
--data '{
""id"": 1,
""name"": ""Cleopaws"",
""age"": 5
}' \
'http://localhost:8001/-/insert/data/dogs?pk=id'
```
### Automatically adding new columns
If you send data to an existing table with keys that are not reflected by the existing columns, you will get an HTTP 400 error with a JSON response like this:
```json
{
""status"": 400,
""error"": ""Unknown keys: 'foo'"",
""error_code"": ""unknown_keys""
}
```
If you add `?alter=1` to the URL you are posting to any missing columns will be automatically added:
```
curl --request POST \
--data '[
{
""id"": 3,
""name"": ""Boris"",
""age"": 1,
""breed"": ""Husky""
}
]' \
'http://localhost:8001/-/insert/data/dogs?alter=1'
```
## Upserting data
An ""upsert"" operation can be used to partially update a record. With upserts you can send a subset of the keys and, if the ID matches the specified primary key, they will be used to update an existing record.
Upserts can be sent to the `/-/upsert` API endpoint.
This example will update the dog with ID=1's age from 5 to 7:
```
curl --request POST \
--data '{
""id"": 1,
""age"": 7
}' \
'http://localhost:3322/-/upsert/data/dogs?pk=id'
```
Like the `/-/insert` endpoint, the `/-/upsert` endpoint can accept an array of objects too. It also supports the `?alter=1` option.
## Permissions and authentication
This plugin defaults to denying all access, to help ensure people don't accidentally deploy it on the open internet in an unsafe configuration.
You can read about [Datasette's approach to authentication](https://datasette.readthedocs.io/en/stable/authentication.html) in the Datasette manual.
You can install the `datasette-insert-unsafe` plugin to run in unsafe mode, where all access is allowed by default.
I recommend using this plugin in conjunction with [datasette-auth-tokens](https://github.com/simonw/datasette-auth-tokens), which provides a mechanism for making authenticated calls using API tokens.
You can then use [""allow"" blocks](https://datasette.readthedocs.io/en/stable/authentication.html#defining-permissions-with-allow-blocks) in the `datasette-insert` plugin configuration to specify which authenticated tokens are allowed to make use of the API.
Here's an example `metadata.json` file which restricts access to the `/-/insert` API to an API token defined in an `INSERT_TOKEN` environment variable:
```json
{
""plugins"": {
""datasette-insert"": {
""allow"": {
""bot"": ""insert-bot""
}
},
""datasette-auth-tokens"": {
""tokens"": [
{
""token"": {
""$env"": ""INSERT_TOKEN""
},
""actor"": {
""bot"": ""insert-bot""
}
}
]
}
}
}
```
With this configuration in place you can start Datasette like this:
INSERT_TOKEN=abc123 datasette data.db -m metadata.json
You can now send data to the API using `curl` like this:
```
curl --request POST \
-H ""Authorization: Bearer abc123"" \
--data '[
{
""id"": 3,
""name"": ""Boris"",
""age"": 1,
""breed"": ""Husky""
}
]' \
'http://localhost:8001/-/insert/data/dogs'
```
Or using the Python `requests` library like so:
```python
requests.post(
""http://localhost:8001/-/insert/data/dogs"",
json={""id"": 1, ""name"": ""Cleopaws"", ""age"": 5},
headers={""Authorization"": ""bearer abc123""},
)
```
### Finely grained permissions
Using an `""allow""` block as described above grants full permission to the features enabled by the API.
The API implements several new Datasette permissions, which other plugins can use to make more finely grained decisions.
The full set of permissions is as follows:
- `insert:all` - all permissions - this is used by the `""allow""` block described above. Argument: `database_name`
- `insert:insert-update` - the ability to insert data into an existing table, or to update data by its primary key. Arguments: `(database_name, table_name)`
- `insert:create-table` - the ability to create a new table. Argument: `database_name`
- `insert:alter-table` - the ability to add columns to an existing table (using `?alter=1`). Arguments: `(database_name, table_name)`
You can use plugins like [datasette-permissions-sql](https://github.com/simonw/datasette-permissions-sql) to hook into these more detailed permissions for finely grained control over what actions each authenticated actor can take.
Plugins that implement the [permission_allowed()](https://datasette.readthedocs.io/en/stable/plugin_hooks.html#plugin-hook-permission-allowed) plugin hook can take full control over these permission decisions.
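For illustration only, here's a minimal sketch of what such a plugin could look like - the actor id and database name are hypothetical, and this is not part of datasette-insert itself:
```python
from datasette import hookimpl


@hookimpl
def permission_allowed(actor, action, resource):
    # Hypothetical policy: only the actor with id 'schema-admin' may create
    # new tables (insert:create-table) in the 'data' database.
    if action == 'insert:create-table' and resource == 'data':
        return bool(actor) and actor.get('id') == 'schema-admin'
    # Returning None leaves the decision to other plugins and the defaults.
    return None
```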
## CORS
If you start Datasette with the `datasette --cors` option the following HTTP headers will be added to resources served by this plugin:
Access-Control-Allow-Origin: *
Access-Control-Allow-Headers: content-type,authorization
Access-Control-Allow-Methods: POST
## Development
To set up this plugin locally, first checkout the code. Then create a new virtual environment:
cd datasette-insert
python3 -m venv venv
source venv/bin/activate
Now install the dependencies and tests:
pip install -e '.[test]'
To run the tests:
pytest
","
",1,public,0,,0,
281481347,MDEwOlJlcG9zaXRvcnkyODE0ODEzNDc=,datasette-copyable,simonw/datasette-copyable,0,9599,https://github.com/simonw/datasette-copyable,Datasette plugin for outputting tables in formats suitable for copy and paste,0,2020-07-21T19:04:08Z,2022-03-26T20:02:45Z,2022-03-26T20:02:42Z,,11,11,11,Python,1,1,1,1,0,0,0,0,0,,"[""datasette"", ""datasette-io"", ""datasette-plugin""]",0,0,11,main,"{""admin"": false, ""maintain"": false, ""push"": false, ""triage"": false, ""pull"": false}",,,0,1,"# datasette-copyable
[](https://pypi.org/project/datasette-copyable/)
[](https://github.com/simonw/datasette-copyable/releases)
[](https://github.com/simonw/datasette-copyable/blob/master/LICENSE)
Datasette plugin for outputting tables in formats suitable for copy and paste
## Installation
Install this plugin in the same environment as Datasette.
$ pip install datasette-copyable
## Demo
You can try this plugin on [fivethirtyeight.datasettes.com](https://fivethirtyeight.datasettes.com/) - browse for tables or queries there and look for the ""copyable"" link. Here's an example for a table of [airline safety data](https://fivethirtyeight.datasettes.com/fivethirtyeight/airline-safety~2Fairline-safety.copyable).
## Usage
This plugin adds a `.copyable` output extension to every table, view and query.
Navigating to this page will show an interface allowing you to select a format for copying and pasting the data. The default is TSV, which is suitable for copying into Google Sheets or Excel.
You can add `?_raw=1` to get back just the raw data.
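The raw output can also be fetched programmatically. A minimal sketch using the Python `requests` library, with a hypothetical local instance and table:
```python
import requests

# Hypothetical: a local Datasette with data.db attached and a dogs table.
# ?_raw=1 returns just the copyable data (TSV by default), not the HTML page.
response = requests.get('http://localhost:8001/data/dogs.copyable?_raw=1')
print(response.text)
```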
## Development
To set up this plugin locally, first checkout the code. Then create a new virtual environment:
cd datasette-copyable
python3 -mvenv venv
source venv/bin/activate
Or if you are using `pipenv`:
pipenv shell
Now install the dependencies and tests:
pip install -e '.[test]'
To run the tests:
pytest
","
",1,public,0,,,
284383265,MDEwOlJlcG9zaXRvcnkyODQzODMyNjU=,datasette-graphql,simonw/datasette-graphql,0,9599,https://github.com/simonw/datasette-graphql,Datasette plugin providing an automatic GraphQL API for your SQLite databases,0,2020-08-02T03:31:58Z,2022-07-17T02:00:26Z,2022-07-18T21:13:34Z,https://datasette-graphql-demo.datasette.io/,715,63,63,Python,1,1,1,1,0,5,0,0,8,,"[""datasette"", ""datasette-io"", ""datasette-plugin"", ""graphql"", ""sqlite""]",5,8,63,main,"{""admin"": false, ""maintain"": false, ""push"": false, ""triage"": false, ""pull"": false}",,,5,3,"# datasette-graphql
[](https://pypi.org/project/datasette-graphql/)
[](https://github.com/simonw/datasette-graphql/releases)
[](https://github.com/simonw/datasette-graphql/actions?query=workflow%3ATest)
[](https://github.com/simonw/datasette-graphql/blob/main/LICENSE)
**Datasette plugin providing an automatic GraphQL API for your SQLite databases**
Read more about this project: [GraphQL in Datasette with the new datasette-graphql plugin](https://simonwillison.net/2020/Aug/7/datasette-graphql/)
Try out a live demo at [datasette-graphql-demo.datasette.io/graphql](https://datasette-graphql-demo.datasette.io/graphql?query=%7B%0A%20%20repos(first%3A10%2C%20search%3A%20%22sql%22%2C%20sort_desc%3A%20created_at)%20%7B%0A%20%20%20%20totalCount%0A%20%20%20%20pageInfo%20%7B%0A%20%20%20%20%20%20endCursor%0A%20%20%20%20%20%20hasNextPage%0A%20%20%20%20%7D%0A%20%20%20%20nodes%20%7B%0A%20%20%20%20%20%20full_name%0A%20%20%20%20%20%20description_%0A%20%20%20%20%09stargazers_count%0A%20%20%20%20%20%20created_at%0A%20%20%20%20%20%20owner%20%7B%0A%20%20%20%20%20%20%20%20name%0A%20%20%20%20%20%20%20%20html_url%0A%20%20%20%20%20%20%7D%0A%20%20%20%20%7D%0A%20%20%7D%0A%7D%0A)
- [Installation](#installation)
- [Usage](#usage)
* [Querying for tables and columns](#querying-for-tables-and-columns)
* [Fetching a single record](#fetching-a-single-record)
* [Accessing nested objects](#accessing-nested-objects)
* [Accessing related objects](#accessing-related-objects)
* [Filtering tables](#filtering-tables)
* [Sorting](#sorting)
* [Pagination](#pagination)
* [Search](#search)
* [Columns containing JSON strings](#columns-containing-json-strings)
* [Auto camelCase](#auto-camelcase)
* [CORS](#cors)
* [Execution limits](#execution-limits)
- [The graphql() template function](#the-graphql-template-function)
- [Adding custom fields with plugins](#adding-custom-fields-with-plugins)
- [Development](#development)

## Installation
Install this plugin in the same environment as Datasette.
$ datasette install datasette-graphql
## Usage
This plugin sets up `/graphql` as a GraphQL endpoint for the first attached database.
If you have multiple attached databases each will get its own endpoint at `/graphql/name_of_database`.
The automatically generated GraphQL schema is available at `/graphql/name_of_database.graphql` - here's [an example](https://datasette-graphql-demo.datasette.io/graphql/github.graphql).
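As a quick sketch, you can call the endpoint from Python with the `requests` library by POSTing the usual JSON body with a query key - the table and fields below match the demo database and are assumptions for your own data:
```python
import requests

# Hedged sketch: POST a GraphQL query as JSON. The repos table and its
# columns come from the demo database; substitute your own table and fields.
query = '''
{
  repos {
    nodes {
      id
      full_name
    }
  }
}
'''
response = requests.post(
    'https://datasette-graphql-demo.datasette.io/graphql',
    json={'query': query},
)
print(response.json()['data']['repos']['nodes'])
```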
### Querying for tables and columns
Individual tables (and SQL views) can be queried like this:
```graphql
{
repos {
nodes {
id
full_name
description_
}
}
}
```
[Try this query](https://datasette-graphql-demo.datasette.io/graphql?query=%0A%7B%0A%20%20repos%20%7B%0A%20%20%20%20nodes%20%7B%0A%20%20%20%20%20%20id%0A%20%20%20%20%20%20full_name%0A%20%20%20%20%20%20description_%0A%20%20%20%20%7D%0A%20%20%7D%0A%7D%0A)
In this example query the underlying database table is called `repos` and its columns include `id`, `full_name` and `description`. Since `description` is a reserved word the query needs to ask for `description_` instead.
### Fetching a single record
If you only want to fetch a single record - for example if you want to fetch a row by its primary key - you can use the `tablename_row` field:
```graphql
{
repos_row(id: 107914493) {
id
full_name
description_
}
}
```
[Try this query](https://datasette-graphql-demo.datasette.io/graphql?query=%0A%7B%0A%20%20repos_row%28id%3A%20107914493%29%20%7B%0A%20%20%20%20id%0A%20%20%20%20full_name%0A%20%20%20%20description_%0A%20%20%7D%0A%7D%0A)
The `tablename_row` field accepts the primary key column (or columns) as arguments. It also supports the same `filter:`, `search:`, `sort:` and `sort_desc:` arguments as the `tablename` field, described below.
### Accessing nested objects
If a column is a foreign key to another table, you can request columns from the table pointed to by that foreign key using a nested query like this:
```graphql
{
repos {
nodes {
id
full_name
owner {
id
login
}
}
}
}
```
[Try this query](https://datasette-graphql-demo.datasette.io/graphql?query=%0A%7B%0A%20%20repos%20%7B%0A%20%20%20%20nodes%20%7B%0A%20%20%20%20%20%20id%0A%20%20%20%20%20%20full_name%0A%20%20%20%20%20%20owner%20%7B%0A%20%20%20%20%20%20%20%20id%0A%20%20%20%20%20%20%20%20login%0A%20%20%20%20%20%20%7D%0A%20%20%20%20%7D%0A%20%20%7D%0A%7D%0A)
### Accessing related objects
If another table has a foreign key back to the table you are accessing, you can fetch rows from that related table.
Consider a `users` table which is related to `repos` - a repo has a foreign key back to the user that owns the repository. The `users` object type will have a `repos_by_owner_list` field which can be used to access those related repos:
```graphql
{
users(first: 1, search: ""simonw"") {
nodes {
name
repos_by_owner_list(first: 5) {
totalCount
nodes {
full_name
}
}
}
}
}
```
[Try this query](https://datasette-graphql-demo.datasette.io/graphql?query=%0A%7B%0A%20%20users%28first%3A%201%2C%20search%3A%20%22simonw%22%29%20%7B%0A%20%20%20%20nodes%20%7B%0A%20%20%20%20%20%20name%0A%20%20%20%20%20%20repos_by_owner_list%28first%3A%205%29%20%7B%0A%20%20%20%20%20%20%20%20totalCount%0A%20%20%20%20%20%20%20%20nodes%20%7B%0A%20%20%20%20%20%20%20%20%20%20full_name%0A%20%20%20%20%20%20%20%20%7D%0A%20%20%20%20%20%20%7D%0A%20%20%20%20%7D%0A%20%20%7D%0A%7D%0A)
### Filtering tables
You can filter the rows returned for a specific table using the `filter:` argument. This accepts a filter object mapping columns to operations. For example, to return just repositories with the Apache 2 license and more than 10 stars:
```graphql
{
repos(filter: {license: {eq: ""apache-2.0""}, stargazers_count: {gt: 10}}) {
nodes {
full_name
stargazers_count
license {
key
}
}
}
}
```
[Try this query](https://datasette-graphql-demo.datasette.io/graphql?query=%0A%7B%0A%20%20repos%28filter%3A%20%7Blicense%3A%20%7Beq%3A%20%22apache-2.0%22%7D%2C%20stargazers_count%3A%20%7Bgt%3A%2010%7D%7D%29%20%7B%0A%20%20%20%20nodes%20%7B%0A%20%20%20%20%20%20full_name%0A%20%20%20%20%20%20stargazers_count%0A%20%20%20%20%20%20license%20%7B%0A%20%20%20%20%20%20%20%20key%0A%20%20%20%20%20%20%7D%0A%20%20%20%20%7D%0A%20%20%7D%0A%7D%0A)
See [table filters examples](https://github.com/simonw/datasette-graphql/blob/main/examples/filters.md) for more operations, and [column filter arguments](https://docs.datasette.io/en/stable/json_api.html#column-filter-arguments) in the Datasette documentation for details of how those operations work.
These same filters can be used on nested relationships, like so:
```graphql
{
users_row(id: 9599) {
name
repos_by_owner_list(filter: {name: {startswith: ""datasette-""}}) {
totalCount
nodes {
full_name
}
}
}
}
```
[Try this query](https://datasette-graphql-demo.datasette.io/graphql?query=%0A%7B%0A%20%20users_row%28id%3A%209599%29%20%7B%0A%20%20%20%20name%0A%20%20%20%20repos_by_owner_list%28filter%3A%20%7Bname%3A%20%7Bstartswith%3A%20%22datasette-%22%7D%7D%29%20%7B%0A%20%20%20%20%20%20totalCount%0A%20%20%20%20%20%20nodes%20%7B%0A%20%20%20%20%20%20%20%20full_name%0A%20%20%20%20%20%20%7D%0A%20%20%20%20%7D%0A%20%20%7D%0A%7D%0A)
The `where:` argument can be used as an alternative to `filter:` when the thing you are expressing is too complex to be modeled using a filter expression. It accepts a string fragment of SQL that will be included in the `WHERE` clause of the SQL query.
```graphql
{
repos(where: ""name='sqlite-utils' or name like 'datasette-%'"") {
totalCount
nodes {
full_name
}
}
}
```
[Try this query](https://datasette-graphql-demo.datasette.io/graphql?query=%0A%7B%0A%20%20repos%28where%3A%20%22name%3D%27sqlite-utils%27%20or%20name%20like%20%27datasette-%25%27%22%29%20%7B%0A%20%20%20%20totalCount%0A%20%20%20%20nodes%20%7B%0A%20%20%20%20%20%20full_name%0A%20%20%20%20%7D%0A%20%20%7D%0A%7D%0A)
### Sorting
You can set a sort order for results from a table using the `sort:` or `sort_desc:` arguments. The value for this argument should be the name of the column you wish to sort (or sort-descending) by.
```graphql
{
repos(sort_desc: stargazers_count) {
nodes {
full_name
stargazers_count
}
}
}
```
[Try this query](https://datasette-graphql-demo.datasette.io/graphql?query=%0A%7B%0A%20%20repos%28sort_desc%3A%20stargazers_count%29%20%7B%0A%20%20%20%20nodes%20%7B%0A%20%20%20%20%20%20full_name%0A%20%20%20%20%20%20stargazers_count%0A%20%20%20%20%7D%0A%20%20%7D%0A%7D%0A)
### Pagination
By default the first 10 rows will be returned. You can control this using the `first:` argument.
```graphql
{
repos(first: 20) {
totalCount
pageInfo {
hasNextPage
endCursor
}
nodes {
full_name
stargazers_count
license {
key
}
}
}
}
```
[Try this query](https://datasette-graphql-demo.datasette.io/graphql?query=%0A%7B%0A%20%20repos%28first%3A%2020%29%20%7B%0A%20%20%20%20totalCount%0A%20%20%20%20pageInfo%20%7B%0A%20%20%20%20%20%20hasNextPage%0A%20%20%20%20%20%20endCursor%0A%20%20%20%20%7D%0A%20%20%20%20nodes%20%7B%0A%20%20%20%20%20%20full_name%0A%20%20%20%20%20%20stargazers_count%0A%20%20%20%20%20%20license%20%7B%0A%20%20%20%20%20%20%20%20key%0A%20%20%20%20%20%20%7D%0A%20%20%20%20%7D%0A%20%20%7D%0A%7D%0A)
The `totalCount` field returns the total number of records that match the query.
Requesting the `pageInfo.endCursor` field provides you with the value you need to request the next page. You can pass this to the `after:` argument to request the next page.
```graphql
{
repos(first: 20, after: ""134874019"") {
totalCount
pageInfo {
hasNextPage
endCursor
}
nodes {
full_name
stargazers_count
license {
key
}
}
}
}
```
[Try this query](https://datasette-graphql-demo.datasette.io/graphql?query=%0A%7B%0A%20%20repos%28first%3A%2020%2C%20after%3A%20%22134874019%22%29%20%7B%0A%20%20%20%20totalCount%0A%20%20%20%20pageInfo%20%7B%0A%20%20%20%20%20%20hasNextPage%0A%20%20%20%20%20%20endCursor%0A%20%20%20%20%7D%0A%20%20%20%20nodes%20%7B%0A%20%20%20%20%20%20full_name%0A%20%20%20%20%20%20stargazers_count%0A%20%20%20%20%20%20license%20%7B%0A%20%20%20%20%20%20%20%20key%0A%20%20%20%20%20%20%7D%0A%20%20%20%20%7D%0A%20%20%7D%0A%7D%0A)
The `hasNextPage` field tells you if there are any more records.
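Putting those pieces together, here's a hedged Python sketch that pages through every row by following `endCursor` until `hasNextPage` is false - it targets the demo instance and assumes the endpoint accepts POSTed queries with variables:
```python
import requests

# Hedged sketch: fetch every repos row, 20 at a time, using cursor pagination.
url = 'https://datasette-graphql-demo.datasette.io/graphql'
query = '''
query ($after: String) {
  repos(first: 20, after: $after) {
    pageInfo { hasNextPage endCursor }
    nodes { full_name }
  }
}
'''
names = []
after = None
while True:
    response = requests.post(
        url, json={'query': query, 'variables': {'after': after}}
    )
    repos = response.json()['data']['repos']
    names.extend(node['full_name'] for node in repos['nodes'])
    if not repos['pageInfo']['hasNextPage']:
        break
    after = repos['pageInfo']['endCursor']
print(len(names), 'rows fetched')
```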
### Search
If a table has been configured to use SQLite full-text search you can execute searches against it using the `search:` argument:
```graphql
{
repos(search: ""datasette"") {
totalCount
pageInfo {
hasNextPage
endCursor
}
nodes {
full_name
description_
}
}
}
```
[Try this query](https://datasette-graphql-demo.datasette.io/graphql?query=%0A%7B%0A%20%20repos%28search%3A%20%22datasette%22%29%20%7B%0A%20%20%20%20totalCount%0A%20%20%20%20pageInfo%20%7B%0A%20%20%20%20%20%20hasNextPage%0A%20%20%20%20%20%20endCursor%0A%20%20%20%20%7D%0A%20%20%20%20nodes%20%7B%0A%20%20%20%20%20%20full_name%0A%20%20%20%20%20%20description_%0A%20%20%20%20%7D%0A%20%20%7D%0A%7D%0A)
The [sqlite-utils](https://sqlite-utils.datasette.io/) Python library and CLI tool can be used to add full-text search to an existing database table.
### Columns containing JSON strings
If your table has a column that contains data encoded as JSON, `datasette-graphql` will make that column available as an encoded JSON string. Clients calling your API will need to parse the string as JSON in order to access the data.
You can return the data as a nested structure by configuring that column to be treated as a JSON column. The [plugin configuration](https://docs.datasette.io/en/stable/plugins.html#plugin-configuration) for that in `metadata.json` looks like this:
```json
{
""databases"": {
""test"": {
""tables"": {
""repos"": {
""plugins"": {
""datasette-graphql"": {
""json_columns"": [
""tags""
]
}
}
}
}
}
}
}
```
### Auto camelCase
By default, the GraphQL field names exactly match the names of your tables and columns.
If you have tables with `names_like_this` you may want to work with them in GraphQL using `namesLikeThis`, for consistency with GraphQL and JavaScript conventions.
You can turn on automatic camelCase using the `""auto_camelcase""` plugin configuration setting in `metadata.json`, like this:
```json
{
""plugins"": {
""datasette-graphql"": {
""auto_camelcase"": true
}
}
}
```
### CORS
This plugin obeys the `--cors` option passed to the `datasette` command-line tool. If you pass `--cors` it adds the following CORS HTTP headers to allow JavaScript running on other domains to access the GraphQL API:
access-control-allow-headers: content-type
access-control-allow-method: POST
access-control-allow-origin: *
### Execution limits
The plugin implements two limits by default:
- The total time spent executing all of the underlying SQL queries that make up the GraphQL execution must not exceed 1000ms (one second)
- The total number of SQL table queries executed as a result of nested GraphQL fields must not exceed 100
These limits can be customized using the `num_queries_limit` and `time_limit_ms` plugin configuration settings, for example in `metadata.json`:
```json
{
""plugins"": {
""datasette-graphql"": {
""num_queries_limit"": 200,
""time_limit_ms"": 5000
}
}
}
```
Setting these to `0` will disable the limit checks entirely.
## The graphql() template function
The plugin also makes a Jinja template function available called `graphql()`. You can use that function in your Datasette [custom templates](https://docs.datasette.io/en/stable/custom_templates.html#custom-templates) like so:
```html+jinja
{% set users = graphql(""""""
{
users {
nodes {
name
points
score
}
}
}
"""""")[""users""] %}
{% for user in users.nodes %}
  <p>{{ user.name }} - {{ user.score }}</p>
{% endfor %}
```
The function executes a GraphQL query against the generated schema and returns the results. You can assign those results to a variable in your template and then loop through and display them.
By default the query will be run against the first attached database. You can use the optional second argument to the function to specify a different database - for example, to run against an attached `github.db` database you would do this:
```html+jinja
{% set user = graphql(""""""
{
users_row(id:9599) {
name
login
avatar_url
}
}
"""""", ""github"")[""users_row""] %}
Hello, {{ user.name }}
```
You can use [GraphQL variables](https://graphql.org/learn/queries/#variables) in these template calls by passing them to the `variables=` argument:
```html+jinja
{% set user = graphql(""""""
query ($id: Int) {
users_row(id: $id) {
name
login
avatar_url
}
}
"""""", database=""github"", variables={""id"": 9599})[""users_row""] %}
Hello, {{ user.name }}
```
## Adding custom fields with plugins
`datasette-graphql` adds a new [plugin hook](https://docs.datasette.io/en/stable/writing_plugins.html) to Datasette which can be used to add custom fields to your GraphQL schema.
The plugin hook looks like this:
```python
@hookimpl
def graphql_extra_fields(datasette, database):
""A list of (name, field_type) tuples to include in the GraphQL schema""
```
You can use this hook to return a list of tuples describing additional fields that should be exposed in your schema. Each tuple should consist of a string naming the new field, plus a [Graphene Field object](https://docs.graphene-python.org/en/latest/types/objecttypes/) that specifies the schema and provides a `resolver` function.
This example implementation uses `pkg_resources` to return a list of currently installed Python packages:
```python
import graphene
from datasette import hookimpl
import pkg_resources
@hookimpl
def graphql_extra_fields():
class Package(graphene.ObjectType):
""An installed package""
name = graphene.String()
version = graphene.String()
def resolve_packages(root, info):
return [
{""name"": d.project_name, ""version"": d.version}
for d in pkg_resources.working_set
]
return [
(
""packages"",
graphene.Field(
graphene.List(Package),
description=""List of installed packages"",
resolver=resolve_packages,
),
),
]
```
With this plugin installed, the following GraphQL query can be used to retrieve a list of installed packages:
```graphql
{
packages {
name
version
}
}
```
## Development
To set up this plugin locally, first checkout the code. Then create a new virtual environment:
cd datasette-graphql
python3 -mvenv venv
source venv/bin/activate
Or if you are using `pipenv`:
pipenv shell
Now install the dependencies and tests:
pip install -e '.[test]'
To run the tests:
pytest
","
",1,public,0,,0,
288629766,MDEwOlJlcG9zaXRvcnkyODg2Mjk3NjY=,datasette-schema-versions,simonw/datasette-schema-versions,0,9599,https://github.com/simonw/datasette-schema-versions,Datasette plugin that shows the schema version of every attached database,0,2020-08-19T04:04:39Z,2021-09-11T02:42:37Z,2021-09-11T02:44:32Z,,5,0,0,Python,1,1,1,1,0,0,0,0,0,,"[""datasette"", ""datasette-io"", ""datasette-plugin""]",0,0,0,main,"{""admin"": false, ""maintain"": false, ""push"": false, ""triage"": false, ""pull"": false}",,,0,2,"# datasette-schema-versions
[](https://pypi.org/project/datasette-schema-versions/)
[](https://github.com/simonw/datasette-schema-versions/releases)
[](https://github.com/simonw/datasette-schema-versions/actions?query=workflow%3ATest)
[](https://github.com/simonw/datasette-schema-versions/blob/main/LICENSE)
Datasette plugin that shows the schema version of every attached database
## Installation
Install this plugin in the same environment as Datasette.
$ datasette install datasette-schema-versions
## Usage
Visit `/-/schema-versions` on your Datasette instance to see a numeric version for the schema for each of your databases.
Any changes you make to the schema will increase this version number.
","
",,,,,,
291339086,MDEwOlJlcG9zaXRvcnkyOTEzMzkwODY=,airtable-export,simonw/airtable-export,0,9599,https://github.com/simonw/airtable-export,"Export Airtable data to YAML, JSON or SQLite files on disk",0,2020-08-29T19:51:37Z,2021-06-08T17:30:30Z,2021-04-09T23:41:52Z,https://datasette.io/tools/airtable-export,41,33,33,Python,1,1,1,1,0,5,0,0,6,apache-2.0,"[""yaml"", ""airtable"", ""airtable-api"", ""datasette-io"", ""datasette-tool""]",5,6,33,main,"{""admin"": false, ""push"": false, ""pull"": false}",,,5,3,"# airtable-export
[](https://pypi.org/project/airtable-export/)
[](https://github.com/simonw/airtable-export/releases)
[](https://github.com/simonw/airtable-export/actions?query=workflow%3ATest)
[](https://github.com/simonw/airtable-export/blob/master/LICENSE)
Export Airtable data to files on disk
## Installation
Install this tool using `pip`:
$ pip install airtable-export
## Usage
You will need to know the following information:
- Your Airtable base ID - this is a string starting with `app...`
- Your Airtable API key - this is a string starting with `key...`
- The names of each of the tables that you wish to export
You can export all of your data to a folder called `export/` by running the following:
airtable-export export base_id table1 table2 --key=key
This example would create two files: `export/table1.yml` and `export/table2.yml`.
Rather than passing the API key using the `--key` option you can set it as an environment variable called `AIRTABLE_KEY`.
## Export options
By default the tool exports your data as YAML.
You can also export as JSON or as [newline delimited JSON](http://ndjson.org/) using the `--json` or `--ndjson` options:
airtable-export export base_id table1 table2 --key=key --ndjson
You can pass multiple format options at once. This command will create a `.json`, `.yml` and `.ndjson` file for each exported table:
airtable-export export base_id table1 table2 \
--key=key --ndjson --yaml --json
### SQLite database export
You can export tables to a SQLite database file using the `--sqlite database.db` option:
airtable-export export base_id table1 table2 \
--key=key --sqlite database.db
This can be combined with other format options. If you only specify `--sqlite` the export directory argument will be ignored.
The SQLite database will have a table created for each table you export. Those tables will have a primary key column called `airtable_id`.
If you run this command against an existing SQLite database records with matching primary keys will be over-written by new records from the export.
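As a quick way to sanity-check the result, here's a minimal Python sketch (standard library only) that lists the tables and columns in the exported file - it assumes you ran the command above with `--sqlite database.db`:
```python
import sqlite3

# Assumes an export was created with: airtable-export ... --sqlite database.db
conn = sqlite3.connect('database.db')
tables = [row[0] for row in conn.execute(
    'select name from sqlite_master where type = ?', ('table',)
)]
for table in tables:
    # Each exported table should include an airtable_id primary key column.
    columns = [row[1] for row in conn.execute('pragma table_info([{}])'.format(table))]
    print(table, columns)
conn.close()
```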
## Request options
By default the tool uses [python-httpx](https://www.python-httpx.org)'s default configurations.
You can override the `user-agent` using the `--user-agent` option:
airtable-export export base_id table1 table2 --key=key --user-agent ""Airtable Export Robot""
You can override the [timeout during a network read operation](https://www.python-httpx.org/advanced/#fine-tuning-the-configuration) using the `--http-read-timeout` option. If not set, this defaults to 5s.
airtable-export export base_id table1 table2 --key=key --http-read-timeout 60
## Running this using GitHub Actions
[GitHub Actions](https://github.com/features/actions) is GitHub's workflow automation product. You can use it to run `airtable-export` in order to back up your Airtable data to a GitHub repository. Doing this gives you a visible commit history of changes you make to your Airtable data - like [this one](https://github.com/natbat/rockybeaches/commits/main/airtable).
To run this for your own Airtable database you'll first need to add the following secrets to your GitHub repository:
AIRTABLE_BASE_ID
The base ID, a string beginning `app...`
AIRTABLE_KEY
Your Airtable API key
AIRTABLE_TABLES
A space separated list of the Airtable tables that you want to backup. If any of these contain spaces you will need to enclose them in single quotes, e.g. 'My table with spaces in the name' OtherTableWithNoSpaces
Once you have set those secrets, add the following as a file called `.github/workflows/backup-airtable.yml`:
```yaml
name: Backup Airtable
on:
workflow_dispatch:
schedule:
- cron: '32 0 * * *'
jobs:
build:
runs-on: ubuntu-latest
steps:
- name: Check out repo
uses: actions/checkout@v2
- name: Set up Python
uses: actions/setup-python@v2
with:
python-version: 3.8
- uses: actions/cache@v2
name: Configure pip caching
with:
path: ~/.cache/pip
key: ${{ runner.os }}-pip-
restore-keys: |
${{ runner.os }}-pip-
- name: Install airtable-export
run: |
pip install airtable-export
- name: Backup Airtable to backups/
env:
AIRTABLE_BASE_ID: ${{ secrets.AIRTABLE_BASE_ID }}
AIRTABLE_KEY: ${{ secrets.AIRTABLE_KEY }}
AIRTABLE_TABLES: ${{ secrets.AIRTABLE_TABLES }}
run: |-
airtable-export backups $AIRTABLE_BASE_ID $AIRTABLE_TABLES -v
- name: Commit and push if it changed
run: |-
git config user.name ""Automated""
git config user.email ""actions@users.noreply.github.com""
git add -A
timestamp=$(date -u)
git commit -m ""Latest data: ${timestamp}"" || exit 0
git push
```
This will run once a day (at 32 minutes past midnight UTC) and will also run if you manually click the ""Run workflow"" button, see [GitHub Actions: Manual triggers with workflow_dispatch](https://github.blog/changelog/2020-07-06-github-actions-manual-triggers-with-workflow_dispatch/).
## Development
To contribute to this tool, first checkout the code. Then create a new virtual environment:
cd airtable-export
python -mvenv venv
source venv/bin/activate
Or if you are using `pipenv`:
pipenv shell
Now install the dependencies and tests:
pip install -e '.[test]'
To run the tests:
pytest
","
",,,,,,
291359358,MDEwOlJlcG9zaXRvcnkyOTEzNTkzNTg=,datasette-yaml,simonw/datasette-yaml,0,9599,https://github.com/simonw/datasette-yaml,Export Datasette records as YAML,0,2020-08-29T22:32:15Z,2020-12-28T03:20:36Z,2021-05-13T08:59:53Z,,7,2,2,Python,1,1,1,1,0,1,0,0,1,,"[""yaml"", ""datasette"", ""datasette-plugin"", ""datasette-io""]",1,1,2,main,"{""admin"": false, ""push"": false, ""pull"": false}",,,1,1,"# datasette-yaml
[](https://pypi.org/project/datasette-yaml/)
[](https://github.com/simonw/datasette-yaml/releases)
[](https://github.com/simonw/datasette-yaml/actions?query=workflow%3ATest)
[](https://github.com/simonw/datasette-yaml/blob/main/LICENSE)
Export Datasette records as YAML
## Installation
Install this plugin in the same environment as Datasette.
$ datasette install datasette-yaml
## Usage
Having installed this plugin, every table and query will gain a new `.yaml` export link.
You can also construct these URLs directly: `/dbname/tablename.yaml`
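For example, here's a hedged Python sketch that fetches one of those URLs and parses it with PyYAML, assuming the output parses to a list of row dictionaries - the instance URL and table name are hypothetical:
```python
import requests
import yaml  # provided by the PyYAML package

# Hypothetical: a local Datasette with data.db attached and a dogs table.
response = requests.get('http://localhost:8001/data/dogs.yaml')
rows = yaml.safe_load(response.text)
for row in rows:
    print(row)
```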
## Demo
The plugin is running on [covid-19.datasettes.com](https://covid-19.datasettes.com/) - for example [/covid/latest_ny_times_counties_with_populations.yaml](https://covid-19.datasettes.com/covid/latest_ny_times_counties_with_populations.yaml)
## Development
To set up this plugin locally, first checkout the code. Then create a new virtual environment:
cd datasette-yaml
python3 -mvenv venv
source venv/bin/activate
Or if you are using `pipenv`:
pipenv shell
Now install the dependencies and tests:
pip install -e '.[test]'
To run the tests:
pytest
","
",,,,,,
293164447,MDEwOlJlcG9zaXRvcnkyOTMxNjQ0NDc=,datasette-backup,simonw/datasette-backup,0,9599,https://github.com/simonw/datasette-backup,Plugin adding backup options to Datasette,0,2020-09-05T22:33:29Z,2020-09-24T00:16:59Z,2020-09-07T02:27:30Z,,6,1,1,Python,1,1,1,1,0,0,0,0,3,,"[""datasette"", ""datasette-plugin"", ""datasette-io""]",0,3,1,main,"{""admin"": false, ""push"": false, ""pull"": false}",,,0,1,"# datasette-backup
[](https://pypi.org/project/datasette-backup/)
[](https://github.com/simonw/datasette-backup/releases)
[](https://github.com/simonw/datasette-backup/actions?query=workflow%3ATest)
[](https://github.com/simonw/datasette-backup/blob/main/LICENSE)
Plugin adding backup options to Datasette
## Installation
Install this plugin in the same environment as Datasette.
$ datasette install datasette-backup
## Usage
Once installed, you can download a SQL backup of any of your databases from:
/-/backup/dbname.sql
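For example, a minimal Python sketch that downloads a backup of a database named `data` from a local instance and writes it to disk:
```python
import requests

# Hypothetical: a local Datasette instance with data.db attached.
response = requests.get('http://localhost:8001/-/backup/data.sql')
response.raise_for_status()
with open('data-backup.sql', 'wb') as fp:
    fp.write(response.content)
print('Wrote', len(response.content), 'bytes')
```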
## Development
To set up this plugin locally, first checkout the code. Then create a new virtual environment:
cd datasette-backup
python3 -mvenv venv
source venv/bin/activate
Or if you are using `pipenv`:
pipenv shell
Now install the dependencies and tests:
pip install -e '.[test]'
To run the tests:
pytest
","
",,,,,,
294706267,MDEwOlJlcG9zaXRvcnkyOTQ3MDYyNjc=,datasette-seaborn,simonw/datasette-seaborn,0,9599,https://github.com/simonw/datasette-seaborn,Statistical visualizations for Datasette using Seaborn,0,2020-09-11T13:43:08Z,2022-03-22T01:49:39Z,2022-03-22T01:49:36Z,https://datasette-seaborn-demo.datasette.io/,24,11,11,Python,1,1,1,1,0,0,0,0,5,,"[""datasette"", ""datasette-io"", ""datasette-plugin"", ""seaborn"", ""visualization""]",0,5,11,main,"{""admin"": false, ""maintain"": false, ""push"": false, ""triage"": false, ""pull"": false}",,,0,2,"# datasette-seaborn
[](https://pypi.org/project/datasette-seaborn/)
[](https://github.com/simonw/datasette-seaborn/releases)
[](https://github.com/simonw/datasette-seaborn/actions?query=workflow%3ATest)
[](https://github.com/simonw/datasette-seaborn/blob/main/LICENSE)
Statistical visualizations for Datasette using Seaborn
## Installation
Install this plugin in the same environment as Datasette.
$ datasette install datasette-seaborn
## Usage
Navigate to the new `.seaborn` extension for any Datasette table.
The `_seaborn` argument specifies a method on `sns` to execute, e.g. `?_seaborn=relplot`.
Extra arguments to those methods can be specified using e.g. `&_seaborn_x=column_name`.
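As a rough illustration, the rendered chart can also be fetched programmatically - a hedged sketch using the Python `requests` library, where the instance URL, table and column names are all hypothetical and the response is assumed to be a PNG image:
```python
import requests

# Hypothetical: render a relplot of age vs weight for a local dogs table.
response = requests.get(
    'http://localhost:8001/data/dogs.seaborn',
    params={'_seaborn': 'relplot', '_seaborn_x': 'age', '_seaborn_y': 'weight'},
)
with open('chart.png', 'wb') as fp:
    fp.write(response.content)
```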
## Configuration
The plugin implements a default rendering time limit of five seconds. You can customize this limit using the `render_time_limit` setting, which accepts a floating point number of seconds. Add this to your `metadata.json`:
```json
{
""plugins"": {
""datasette-seaborn"": {
""render_time_limit"": 1.0
}
}
}
```
## Development
To set up this plugin locally, first checkout the code. Then create a new virtual environment:
cd datasette-seaborn
python3 -mvenv venv
source venv/bin/activate
Or if you are using `pipenv`:
pipenv shell
Now install the dependencies and tests:
pip install -e '.[test]'
To run the tests:
pytest
","
",1,public,0,,,
299143849,MDEwOlJlcG9zaXRvcnkyOTkxNDM4NDk=,datasette-dateutil,simonw/datasette-dateutil,0,9599,https://github.com/simonw/datasette-dateutil,dateutil functions for Datasette,0,2020-09-28T00:14:20Z,2022-03-01T00:09:57Z,2022-03-01T01:40:21Z,,18,6,6,Python,1,1,1,1,0,0,0,0,2,,"[""datasette"", ""datasette-io"", ""datasette-plugin"", ""dateutil""]",0,2,6,main,"{""admin"": false, ""maintain"": false, ""push"": false, ""triage"": false, ""pull"": false}",,,0,2,"# datasette-dateutil
[](https://pypi.org/project/datasette-dateutil/)
[](https://github.com/simonw/datasette-dateutil/releases)
[](https://github.com/simonw/datasette-dateutil/actions?query=workflow%3ATest)
[](https://github.com/simonw/datasette-dateutil/blob/main/LICENSE)
dateutil functions for Datasette
## Installation
Install this plugin in the same environment as Datasette.
$ datasette install datasette-dateutil
## Usage
This plugin adds custom SQL functions that expose functionality from the [dateutil](https://dateutil.readthedocs.io/) Python library.
Once installed, the following SQL functions become available:
### Parsing date strings
- `dateutil_parse(text)` - returns an ISO8601 date string parsed from the text, or `null` if the input could not be parsed. `dateutil_parse(""10 october 2020 3pm"")` returns `2020-10-10T15:00:00`.
- `dateutil_parse_fuzzy(text)` - same as `dateutil_parse()` but this also works against strings that contain a date somewhere within them - that date will be returned, or `null` if no dates could be found. `dateutil_parse_fuzzy(""This is due 10 september"")` returns `2020-09-10T00:00:00` (but will start returning the 2021 version of that if the year is 2021).
The `dateutil_parse()` and `dateutil_parse_fuzzy()` functions both follow the American convention of assuming that `1/2/2020` lists the month first, evaluating this example to the 2nd of January.
If you want to assume that the day comes first, use these two functions instead:
- `dateutil_parse_dayfirst(text)`
- `dateutil_parse_fuzzy_dayfirst(text)`
Here's a query demonstrating these functions:
```sql
select
dateutil_parse(""10 october 2020 3pm""),
dateutil_parse_fuzzy(""This is due 10 september""),
dateutil_parse(""1/2/2020""),
dateutil_parse(""2020-03-04""),
dateutil_parse_dayfirst(""2020-03-04"");
```
[Try that query](https://latest-with-plugins.datasette.io/fixtures?sql=select%0D%0A++dateutil_parse%28%2210+october+2020+3pm%22%29%2C%0D%0A++dateutil_parse_fuzzy%28%22This+is+due+10+september%22%29%2C%0D%0A++dateutil_parse%28%221%2F2%2F2020%22%29%2C%0D%0A++dateutil_parse%28%222020-03-04%22%29%2C%0D%0A++dateutil_parse_dayfirst%28%222020-03-04%22%29%3B)
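These functions run inside Datasette, so one way to call them programmatically is through Datasette's `?sql=` JSON API. A hedged Python sketch against the demo instance linked above:
```python
import requests

# Run the parsing functions via SQL through the Datasette JSON API.
sql = '''
select
  dateutil_parse('10 october 2020 3pm') as parsed,
  dateutil_parse_dayfirst('2020-03-04') as day_first
'''
response = requests.get(
    'https://latest-with-plugins.datasette.io/fixtures.json',
    params={'sql': sql, '_shape': 'array'},
)
print(response.json())
```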
### Optional default dates
The `dateutil_parse()`, `dateutil_parse_fuzzy()`, `dateutil_parse_dayfirst()` and `dateutil_parse_fuzzy_dayfirst()` functions all accept an optional second argument specifying a ""default"" datetime to consider if some of the details are missing. For example, the following:
```sql
select dateutil_parse('1st october', '1985-01-01')
```
Will return `1985-10-01T00:00:00` - the missing year is replaced with the year from the default date.
[Example query demonstrating the default date argument](https://latest-with-plugins.datasette.io/fixtures?sql=with+times+as+%28%0D%0A++select%0D%0A++++datetime%28%27now%27%29+as+t%0D%0A++union%0D%0A++select%0D%0A++++datetime%28%27now%27%2C+%27-1+year%27%29%0D%0A++union%0D%0A++select%0D%0A++++datetime%28%27now%27%2C+%27-3+years%27%29%0D%0A%29%0D%0Aselect+t%2C+dateutil_parse_fuzzy%28%22This+is+due+10+september%22%2C+t%29+from+times)
### Calculating Easter
- `dateutil_easter(year)` - returns the date for Easter in that year, for example `dateutil_easter(""2020"")` returns `2020-04-12`.
[Example Easter query](https://latest-with-plugins.datasette.io/fixtures?sql=select%0D%0A++dateutil_easter%282019%29%2C%0D%0A++dateutil_easter%282020%29%2C%0D%0A++dateutil_easter%282021%29)
### JSON arrays of dates
Several functions return JSON arrays of date strings. These can be used with SQLite's `json_each()` function to perform joins against dates from a specific date range or recurrence rule.
These functions can return up to 10,000 results. They will return an error if more than 10,000 dates would be returned - this is to protect against denial of service attacks.
- `dateutil_dates_between('1 january 2020', '5 jan 2020')` - given two dates (in any format that can be handled by `dateutil_parse()`) this function returns a JSON string containing the dates between those two days, inclusive. This example returns `[""2020-01-01"", ""2020-01-02"", ""2020-01-03"", ""2020-01-04"", ""2020-01-05""]`.
- `dateutil_dates_between('1 january 2020', '5 jan 2020', 0)` - set the optional third argument to `0` to specify that you would like this to be exclusive of the last day. This example returns `[""2020-01-01"", ""2020-01-02"", ""2020-01-03"", ""2020-01-04""]`.
[Try these queries](https://latest-with-plugins.datasette.io/fixtures?sql=select%0D%0A++dateutil_dates_between%28%271+january+2020%27%2C+%275+jan+2020%27%29%2C%0D%0A++dateutil_dates_between%28%271+january+2020%27%2C+%275+jan+2020%27%2C+0%29)
The `dateutil_rrule()` and `dateutil_rrule_date()` functions accept the iCalendar standard `rrule` format - see [the dateutil documentation](https://dateutil.readthedocs.io/en/stable/rrule.html#rrulestr-examples) for more examples.
This format lets you specify recurrence rules such as ""the next four last mondays of the month"".
- `dateutil_rrule(rrule, optional_dtstart)` - given an rrule returns a JSON array of ISO datetimes. The second argument is optional and will be treated as the start date for the rule.
- `dateutil_rrule_date(rrule, optional_dtstart)` - same as `dateutil_rrule()` but returns ISO dates.
Example query:
```sql
select
dateutil_rrule('FREQ=HOURLY;COUNT=5'),
dateutil_rrule_date(
'FREQ=DAILY;COUNT=3',
'1st jan 2020'
);
```
[Try the rrule example query](https://latest-with-plugins.datasette.io/fixtures?sql=select%0D%0A++dateutil_rrule('FREQ%3DHOURLY%3BCOUNT%3D5')%2C%0D%0A++dateutil_rrule_date(%0D%0A++++'FREQ%3DDAILY%3BCOUNT%3D3'%2C%0D%0A++++'1st+jan+2020'%0D%0A++)%3B)
### Joining data using json_each()
SQLite's [json_each() function](https://www.sqlite.org/json1.html#jeach) can be used to turn a JSON array of dates into a table that can be joined against other data. Here's a query that returns a table showing every day in January 2019:
```sql
select
value as date
from
json_each(
dateutil_dates_between('1 Jan 2019', '31 Jan 2019')
)
```
[Try that query](https://latest-with-plugins.datasette.io/fixtures?sql=select%0D%0A++value+as+date%0D%0Afrom%0D%0A++json_each%28%0D%0A++++dateutil_dates_between%28%271+Jan+2019%27%2C+%2731+Jan+2019%27%29%0D%0A++%29)
You can run joins against this table by assigning it a name using SQLite's [support for Common Table Expressions (CTEs)](https://sqlite.org/lang_with.html).
This example query uses `substr(created, 0, 11)` to retrieve the date portion of the `created` column in the [facetable demo table](https://latest-with-plugins.datasette.io/fixtures/facetable), then joins that against the table of days in January to calculate the count of rows created on each day. The `LEFT JOIN` against `days_in_january` ensures that days which had no created records are still returned in the results, with a count of 0.
```sql
with created_dates as (
select
substr(created, 0, 11) as date
from
facetable
),
days_in_january as (
select
value as date
from
json_each(
dateutil_dates_between('1 Jan 2019', '31 Jan 2019')
)
)
select
days_in_january.date,
count(created_dates.date) as total
from
days_in_january
left join created_dates on days_in_january.date = created_dates.date
group by
days_in_january.date;
```
[Try that query](https://latest-with-plugins.datasette.io/fixtures?sql=with+created_dates+as+%28%0D%0A++select%0D%0A++++substr%28created%2C+0%2C+11%29+as+date%0D%0A++from%0D%0A++++facetable%0D%0A%29%2C%0D%0Adays_in_january+as+%28%0D%0A++select%0D%0A++++value+as+date%0D%0A++from%0D%0A++++json_each%28%0D%0A++++++dateutil_dates_between%28%271+Jan+2019%27%2C+%2731+Jan+2019%27%29%0D%0A++++%29%0D%0A%29%0D%0Aselect%0D%0A++days_in_january.date%2C%0D%0A++count%28created_dates.date%29+as+total%0D%0Afrom%0D%0A++days_in_january%0D%0A++left+join+created_dates+on+days_in_january.date+%3D+created_dates.date%0D%0Agroup+by%0D%0A++days_in_january.date%3B#g.mark=bar&g.x_column=date&g.x_type=ordinal&g.y_column=total&g.y_type=quantitative) with a bar chart rendered using the [datasette-vega](https://github.com/simonw/datasette-vega) plugin.
## Development
To set up this plugin locally, first checkout the code. Then create a new virtual environment:
cd datasette-dateutil
python3 -mvenv venv
source venv/bin/activate
Or if you are using `pipenv`:
pipenv shell
Now install the dependencies and tests:
pip install -e '.[test]'
To run the tests:
pytest
","
datasette-dateutil
dateutil functions for Datasette
Installation
Install this plugin in the same environment as Datasette.
$ datasette install datasette-dateutil
Usage
This plugin adds custom SQL functions that expose functionality from the dateutil Python library.
Once installed, the following SQL functions become available:
Parsing date strings
dateutil_parse(text) - returns an ISO8601 date string parsed from the text, or null if the input could not be parsed. dateutil_parse(""10 october 2020 3pm"") returns 2020-10-10T15:00:00.
dateutil_parse_fuzzy(text) - same as dateutil_parse() but this also works against strings that contain a date somewhere within them - that date will be returned, or null if no dates could be found. dateutil_parse_fuzzy(""This is due 10 september"") returns 2020-09-10T00:00:00 (but will start returning the 2021 version of that if the year is 2021).
The dateutil_parse() and dateutil_parse_fuzzy() functions both follow the American convention of assuming that 1/2/2020 lists the month first, evaluating this example to the 2nd of January.
If you want to assume that the day comes first, use these two functions instead:
dateutil_parse_dayfirst(text)
dateutil_parse_fuzzy_dayfirst(text)
Here's a query demonstrating these functions:
select
dateutil_parse(""10 october 2020 3pm""),
dateutil_parse_fuzzy(""This is due 10 september""),
dateutil_parse(""1/2/2020""),
dateutil_parse(""2020-03-04""),
dateutil_parse_dayfirst(""2020-03-04"");
The dateutil_parse(), dateutil_parse_fuzzy(), dateutil_parse_dayfirst() and dateutil_parse_fuzzy_dayfirst() functions all accept an optional second argument specifying a ""default"" datetime to consider if some of the details are missing. For example, the following:
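select dateutil_parse('10 october', '2021-01-01')
would take the missing year from that default and return a date in October 2021 (this is an illustrative sketch rather than the original example, and assumes the default can itself be supplied as a date string).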
Several functions return JSON arrays of date strings. These can be used with SQLite's json_each() function to perform joins against dates from a specific date range or recurrence rule.
These functions can return up to 10,000 results. They will return an error if more than 10,000 dates would be returned - this is to protect against denial of service attacks.
dateutil_dates_between('1 january 2020', '5 jan 2020') - given two dates (in any format that can be handled by dateutil_parse()) this function returns a JSON string containing the dates between those two days, inclusive. This example returns [""2020-01-01"", ""2020-01-02"", ""2020-01-03"", ""2020-01-04"", ""2020-01-05""].
dateutil_dates_between('1 january 2020', '5 jan 2020', 0) - set the optional third argument to 0 to specify that you would like this to be exclusive of the last day. This example returns [""2020-01-01"", ""2020-01-02"", ""2020-01-03"", ""2020-01-04""].
The dateutil_rrule() and dateutil_rrule_date() functions accept the iCalendar standard rrule format - see the dateutil documentation for more examples.
This format lets you specify recurrence rules such as ""the next four last mondays of the month"".
dateutil_rrule(rrule, optional_dtstart) - given an rrule returns a JSON array of ISO datetimes. The second argument is optional and will be treated as the start date for the rule.
dateutil_rrule_date(rrule, optional_dtstart) - same as dateutil_rrule() but returns ISO dates.
Example query:
select
dateutil_rrule('FREQ=HOURLY;COUNT=5'),
dateutil_rrule_date(
'FREQ=DAILY;COUNT=3',
'1st jan 2020'
);
SQLite's json_each() function can be used to turn a JSON array of dates into a table that can be joined against other data. Here's a query that returns a table showing every day in January 2019:
select
value as date
from
json_each(
dateutil_dates_between('1 Jan 2019', '31 Jan 2019')
)
This example query uses substr(created, 0, 11) to retrieve the date portion of the created column in the facetable demo table, then joins that against the table of days in January to calculate the count of rows created on each day. The LEFT JOIN against days_in_january ensures that days which had no created records are still returned in the results, with a count of 0.
with created_dates as (
  select
    substr(created, 0, 11) as date
  from
    facetable
),
days_in_january as (
  select
    value as date
  from
    json_each(
      dateutil_dates_between('1 Jan 2019', '31 Jan 2019')
    )
)
select
  days_in_january.date,
  count(created_dates.date) as total
from
  days_in_january
  left join created_dates on days_in_january.date = created_dates.date
group by
  days_in_january.date;
To set up this plugin locally, first checkout the code. Then create a new virtual environment:
cd datasette-dateutil
python3 -mvenv venv
source venv/bin/activate
Or if you are using pipenv:
pipenv shell
Now install the dependencies and tests:
pip install -e '.[test]'
To run the tests:
pytest
",1,public,0,,,
299198369,MDEwOlJlcG9zaXRvcnkyOTkxOTgzNjk=,datasette-import-table,simonw/datasette-import-table,0,9599,https://github.com/simonw/datasette-import-table,Datasette plugin for importing tables from other Datasette instances,0,2020-09-28T05:30:07Z,2022-06-09T15:27:33Z,2022-06-09T16:40:22Z,,20,0,0,Python,1,1,1,1,0,0,0,0,2,,"[""datasette"", ""datasette-io"", ""datasette-plugin""]",0,2,0,main,"{""admin"": false, ""maintain"": false, ""push"": false, ""triage"": false, ""pull"": false}",,,0,2,"# datasette-import-table
[](https://pypi.org/project/datasette-import-table/)
[](https://github.com/simonw/datasette-import-table/releases)
[](https://github.com/simonw/datasette-import-table/actions?query=workflow%3ATest)
[](https://github.com/simonw/datasette-import-table/blob/main/LICENSE)
Datasette plugin for importing tables from other Datasette instances
## Installation
Install this plugin in the same environment as Datasette.
$ datasette install datasette-import-table
## Usage
Visit `/-/import-table` for the interface. Paste in the URL to a table page on another Datasette instance and click the button to import that table.
By default only [the root actor](https://datasette.readthedocs.io/en/stable/authentication.html#using-the-root-actor) can access the page - so you'll need to run Datasette with the `--root` option and click on the link shown in the terminal to sign in and access the page.
The `import-table` permission governs access. You can use permission plugins such as [datasette-permissions-sql](https://github.com/simonw/datasette-permissions-sql) to grant additional access to the write interface.
## Development
To set up this plugin locally, first checkout the code. Then create a new virtual environment:
cd datasette-import-table
python3 -mvenv venv
source venv/bin/activate
Or if you are using `pipenv`:
pipenv shell
Now install the dependencies and tests:
pip install -e '.[test]'
To run the tests:
pytest
","
datasette-import-table
Datasette plugin for importing tables from other Datasette instances
Installation
Install this plugin in the same environment as Datasette.
$ datasette install datasette-import-table
Usage
Visit /-/import-table for the interface. Paste in the URL to a table page on another Datasette instance and click the button to import that table.
By default only the root actor can access the page - so you'll need to run Datasette with the --root option and click on the link shown in the terminal to sign in and access the page.
The import-table permission governs access. You can use permission plugins such as datasette-permissions-sql to grant additional access to the write interface.
Development
To set up this plugin locally, first checkout the code. Then create a new virtual environment:
cd datasette-import-table
python3 -mvenv venv
source venv/bin/activate
Or if you are using pipenv:
pipenv shell
Now install the dependencies and tests:
pip install -e '.[test]'
To run the tests:
pytest
",1,public,0,,,
305199661,MDEwOlJlcG9zaXRvcnkzMDUxOTk2NjE=,sphinx-to-sqlite,simonw/sphinx-to-sqlite,0,9599,https://github.com/simonw/sphinx-to-sqlite,Create a SQLite database from Sphinx documentation,0,2020-10-18T21:26:55Z,2020-12-19T05:08:12Z,2020-10-22T04:55:45Z,,9,2,2,Python,1,1,1,1,0,0,0,0,2,apache-2.0,"[""sqlite"", ""sphinx"", ""datasette-io"", ""datasette-tool""]",0,2,2,main,"{""admin"": false, ""push"": false, ""pull"": false}",,,0,2,"# sphinx-to-sqlite
[](https://pypi.org/project/sphinx-to-sqlite/)
[](https://github.com/simonw/sphinx-to-sqlite/releases)
[](https://github.com/simonw/sphinx-to-sqlite/actions?query=workflow%3ATest)
[](https://github.com/simonw/sphinx-to-sqlite/blob/master/LICENSE)
Create a SQLite database from Sphinx documentation.
## Demo
You can see the results of running this tool against the [Datasette documentation](https://docs.datasette.io/) at https://latest-docs.datasette.io/docs/sections
## Installation
Install this tool using `pip`:
$ pip install sphinx-to-sqlite
## Usage
First run `sphinx-build` with the `-b xml` option to create XML files in your `_build/` directory.
Then run:
$ sphinx-to-sqlite docs.db path/to/_build
To build the SQLite database.
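Once the database has been built you can explore it with Datasette or query it directly - for example (a quick sanity check, assuming the `sections` table shown in the demo above):
```sql
-- Preview a few of the imported documentation sections
select * from sections limit 10;
```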
## Development
To contribute to this tool, first checkout the code. Then create a new virtual environment:
cd sphinx-to-sqlite
python -mvenv venv
source venv/bin/activate
Or if you are using `pipenv`:
pipenv shell
Now install the dependencies and tests:
pip install -e '.[test]'
To run the tests:
pytest
","
sphinx-to-sqlite
Create a SQLite database from Sphinx documentation.
First run sphinx-build with the -b xml option to create XML files in your _build/ directory.
Then run:
$ sphinx-to-sqlite docs.db path/to/_build
To build the SQLite database.
Development
To contribute to this tool, first checkout the code. Then create a new virtual environment:
cd sphinx-to-sqlite
python -mvenv venv
source venv/bin/activate
Or if you are using pipenv:
pipenv shell
Now install the dependencies and tests:
pip install -e '.[test]'
To run the tests:
pytest
",,,,,,
308930118,MDEwOlJlcG9zaXRvcnkzMDg5MzAxMTg=,datasette-edit-templates,simonw/datasette-edit-templates,0,9599,https://github.com/simonw/datasette-edit-templates,Plugin allowing Datasette templates to be edited within Datasette,0,2020-10-31T16:58:29Z,2022-09-14T20:59:49Z,2022-10-27T23:00:04Z,,18,1,1,Python,1,1,1,1,0,0,0,0,3,,[],0,3,1,main,"{""admin"": false, ""maintain"": false, ""push"": false, ""triage"": false, ""pull"": false}",,,0,2,"# datasette-edit-templates
[](https://pypi.org/project/datasette-edit-templates/)
[](https://github.com/simonw/datasette-edit-templates/releases)
[](https://github.com/simonw/datasette-edit-templates/actions?query=workflow%3ATest)
[](https://github.com/simonw/datasette-edit-templates/blob/main/LICENSE)
Plugin allowing Datasette templates to be edited within Datasette.
## Installation
Install this plugin in the same environment as Datasette.
$ datasette install datasette-edit-templates
## Usage
Once installed, sign in as the root user using `datasette mydb.db --root`.
On startup, a `_templates_` table will be created in the database you are running Datasette against.
Use the app menu to navigate to the `/-/edit-templates` page, and edit templates there.
Changes should become visible instantly, and will be persisted to your database.
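Because the templates live in that table, you can also inspect your edits with an ordinary SQL query (a minimal sketch - the exact columns depend on the plugin's schema):
```sql
-- List every template that has been customized so far
select * from _templates_;
```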
## Development
To set up this plugin locally, first checkout the code. Then create a new virtual environment:
cd datasette-edit-templates
python3 -mvenv venv
source venv/bin/activate
Or if you are using `pipenv`:
pipenv shell
Now install the dependencies and tests:
pip install -e '.[test]'
To run the tests:
pytest
","
datasette-edit-templates
Plugin allowing Datasette templates to be edited within Datasette.
Installation
Install this plugin in the same environment as Datasette.
$ datasette install datasette-edit-templates
Usage
Once installed, sign in as the root user using datasette mydb.db --root.
On startup, a _templates_ table will be created in the database you are running Datasette against.
Use the app menu to navigate to the /-/edit-templates page, and edit templates there.
Changes should become visible instantly, and will be persisted to your database.
Development
To set up this plugin locally, first checkout the code. Then create a new virtual environment:
cd datasette-edit-templates
python3 -mvenv venv
source venv/bin/activate
Or if you are using pipenv:
pipenv shell
Now install the dependencies and tests:
pip install -e '.[test]'
To run the tests:
pytest
",1,public,0,,0,
312934001,MDEwOlJlcG9zaXRvcnkzMTI5MzQwMDE=,datasette-indieauth,simonw/datasette-indieauth,0,9599,https://github.com/simonw/datasette-indieauth,Datasette authentication using IndieAuth and RelMeAuth,0,2020-11-15T01:18:21Z,2022-10-25T01:00:43Z,2022-10-25T01:34:47Z,,51,8,8,Python,1,1,1,1,0,0,0,0,1,,"[""datasette"", ""datasette-io"", ""datasette-plugin"", ""indieauth""]",0,1,8,main,"{""admin"": false, ""maintain"": false, ""push"": false, ""triage"": false, ""pull"": false}",,,0,3,,,1,public,0,,0,
315796015,MDEwOlJlcG9zaXRvcnkzMTU3OTYwMTU=,datasette-ripgrep,simonw/datasette-ripgrep,0,9599,https://github.com/simonw/datasette-ripgrep,"Web interface for searching your code using ripgrep, built as a Datasette plugin",0,2020-11-25T01:26:36Z,2022-04-24T03:48:42Z,2022-06-30T22:45:03Z,https://ripgrep.datasette.io,55,58,58,Python,1,1,1,1,0,1,0,0,6,apache-2.0,"[""codesearch"", ""datasette"", ""datasette-io"", ""datasette-plugin"", ""ripgrep""]",1,6,58,main,"{""admin"": false, ""maintain"": false, ""push"": false, ""triage"": false, ""pull"": false}",,,1,3,"# datasette-ripgrep
[](https://pypi.org/project/datasette-ripgrep/)
[](https://github.com/simonw/datasette-ripgrep/releases)
[](https://github.com/simonw/datasette-ripgrep/actions?query=workflow%3ATest)
[](https://github.com/simonw/datasette-ripgrep/blob/main/LICENSE)
Web interface for searching your code using [ripgrep](https://github.com/BurntSushi/ripgrep), built as a [Datasette](https://datasette.io/) plugin
For background on this project see [datasette-ripgrep: deploy a regular expression search engine for your source code](https://simonwillison.net/2020/Nov/28/datasette-ripgrep/).
## Demo
Try this plugin out at https://ripgrep.datasette.io/-/ripgrep - where you can run regular expression searches across the source code of Datasette and all of the `datasette-*` plugins belonging to the [simonw GitHub user](https://github.com/simonw).
Some example searches:
- [with.\*AsyncClient](https://ripgrep.datasette.io/-/ripgrep?pattern=with.*AsyncClient) - regular expression search for `with.*AsyncClient`
- [.plugin_config, literal=on](https://ripgrep.datasette.io/-/ripgrep?pattern=.plugin_config\(&literal=on) - a non-regular expression search for `.plugin_config(`
- [with.\*AsyncClient glob=datasette/\*\*](https://ripgrep.datasette.io/-/ripgrep?pattern=with.*AsyncClient&glob=datasette%2F%2A%2A) - search for that pattern only within the `datasette/` top folder
- [""sqlite-utils\["">\] glob=setup.py](https://ripgrep.datasette.io/-/ripgrep?pattern=%22sqlite-utils%5B%22%3E%5D&glob=setup.py) - a regular expression search for packages that depend on either `sqlite-utils` or `sqlite-utils>=some-version`
- [test glob=!\*.html](https://ripgrep.datasette.io/-/ripgrep?pattern=test&glob=%21*.html) - search for the string `test` but exclude results in HTML files
## Installation
Install this plugin in the same environment as Datasette.
$ datasette install datasette-ripgrep
The `rg` executable needs to be [installed](https://github.com/BurntSushi/ripgrep/blob/master/README.md#installation) such that it can be run by this tool.
## Usage
This plugin requires configuration: it needs a `path` setting so that it knows where to run searches.
Create a `metadata.json` file that looks like this:
```json
{
""plugins"": {
""datasette-ripgrep"": {
""path"": ""/path/to/your/files""
}
}
}
```
Now run Datasette using `datasette -m metadata.json`. The plugin will add an interface at `/-/ripgrep` for running searches.
## Plugin configuration
The `""path""` configuration is required. Optional extra configuration options are:
- `time_limit` - floating point number. The `rg` process will be terminated if it takes longer than this limit. The default is one second, `1.0`.
- `max_lines` - integer. The `rg` process will be terminated if it returns more than this number of lines. The default is `2000`.
## Development
To set up this plugin locally, first checkout the code. Then create a new virtual environment:
cd datasette-ripgrep
python3 -mvenv venv
source venv/bin/activate
Or if you are using `pipenv`:
pipenv shell
Now install the dependencies and tests:
pip install -e '.[test]'
To run the tests:
pytest
","
datasette-ripgrep
Web interface for searching your code using ripgrep, built as a Datasette plugin
Now run Datasette using datasette -m metadata.json. The plugin will add an interface at /-/ripgrep for running searches.
Plugin configuration
The ""path"" configuration is required. Optional extra configuration options are:
time_limit - floating point number. The rg process will be terminated if it takes longer than this limit. The default is one second, 1.0.
max_lines - integer. The rg process will be terminated if it returns more than this number of lines. The default is 2000.
Development
To set up this plugin locally, first checkout the code. Then create a new virtual environment:
cd datasette-ripgrep
python3 -mvenv venv
source venv/bin/activate
Or if you are using pipenv:
pipenv shell
Now install the dependencies and tests:
pip install -e '.[test]'
To run the tests:
pytest
",1,public,0,,0,
327087207,MDEwOlJlcG9zaXRvcnkzMjcwODcyMDc=,datasette-css-properties,simonw/datasette-css-properties,0,9599,https://github.com/simonw/datasette-css-properties,Experimental Datasette output plugin using CSS properties,0,2021-01-05T18:38:07Z,2021-01-12T17:43:11Z,2021-01-07T22:07:19Z,,10,12,12,Python,1,1,1,1,0,0,0,0,1,,"[""datasette-plugin"", ""datasette-io""]",0,1,12,main,"{""admin"": false, ""push"": false, ""pull"": false}",,,0,2,"# datasette-css-properties
[](https://pypi.org/project/datasette-css-properties/)
[](https://github.com/simonw/datasette-css-properties/releases)
[](https://github.com/simonw/datasette-css-properties/actions?query=workflow%3ATest)
[](https://github.com/simonw/datasette-css-properties/blob/main/LICENSE)
Extremely experimental Datasette output plugin using CSS properties, inspired by [Custom Properties as State](https://css-tricks.com/custom-properties-as-state/) by Chris Coyier.
More about this project: [APIs from CSS without JavaScript: the datasette-css-properties plugin](https://simonwillison.net/2021/Jan/7/css-apis-no-javascript/)
## Installation
Install this plugin in the same environment as Datasette.
$ datasette install datasette-css-properties
## Usage
Once installed, this plugin adds a `.css` output format to every query result. This will return the first row in the query as a valid CSS file, defining each column as a custom property:
Example: https://latest-with-plugins.datasette.io/fixtures/roadside_attractions.css produces:
```css
:root {
--pk: '1';
--name: 'The Mystery Spot';
--address: '465 Mystery Spot Road, Santa Cruz, CA 95065';
--latitude: '37.0167';
--longitude: '-122.0024';
}
```
If you link this stylesheet to your page you can then do things like this:
```html
Attraction name:
```
Values will be quoted as CSS strings by default. If you want to return a ""raw"" value without the quotes - for example to set a CSS property that is numeric or a color, you can specify that column name using the `?_raw=column-name` parameter. This can be passed multiple times.
Consider [this example query](https://latest-with-plugins.datasette.io/github?sql=select%0D%0A++%27%23%27+||+substr(sha%2C+0%2C+6)+as+[custom-bg]%0D%0Afrom%0D%0A++commits%0D%0Aorder+by%0D%0A++author_date+desc%0D%0Alimit%0D%0A++1%3B):
```sql
select
'#' || substr(sha, 0, 6) as [custom-bg]
from
commits
order by
author_date desc
limit
1;
```
This returns the first 6 characters of the most recently authored commit with a `#` prefix. The `.css` [output rendered version](https://latest-with-plugins.datasette.io/github.css?sql=select%0D%0A++%27%23%27+||+substr(sha%2C+0%2C+6)+as+[custom-bg]%0D%0Afrom%0D%0A++commits%0D%0Aorder+by%0D%0A++author_date+desc%0D%0Alimit%0D%0A++1%3B) looks like this:
```css
:root {
--custom-bg: '#97fb1';
}
```
Adding `?_raw=custom-bg` to the URL produces [this instead](https://latest-with-plugins.datasette.io/github.css?sql=select%0D%0A++%27%23%27+||+substr(sha%2C+0%2C+6)+as+[custom-bg]%0D%0Afrom%0D%0A++commits%0D%0Aorder+by%0D%0A++author_date+desc%0D%0Alimit%0D%0A++1%3B&_raw=custom-bg):
```css
:root {
--custom-bg: #97fb1;
}
```
This can then be used as a color value like so:
```css
h1 {
background-color: var(--custom-bg);
}
```
## Development
To set up this plugin locally, first checkout the code. Then create a new virtual environment:
cd datasette-css-properties
python3 -mvenv venv
source venv/bin/activate
Or if you are using `pipenv`:
pipenv shell
Now install the dependencies and tests:
pip install -e '.[test]'
To run the tests:
pytest
","
datasette-css-properties
Extremely experimental Datasette output plugin using CSS properties, inspired by Custom Properties as State by Chris Coyier.
Install this plugin in the same environment as Datasette.
$ datasette install datasette-css-properties
Usage
Once installed, this plugin adds a .css output format to every query result. This will return the first row in the query as a valid CSS file, defining each column as a custom property:
Values will be quoted as CSS strings by default. If you want to return a ""raw"" value without the quotes - for example to set a CSS property that is numeric or a color, you can specify that column name using the ?_raw=column-name parameter. This can be passed multiple times.
select
  '#' || substr(sha, 0, 6) as [custom-bg]
from
  commits
order by
  author_date desc
limit
  1;
This returns the first 6 characters of the most recently authored commit with a # prefix. The .css output rendered version looks like this:
:root {
  --custom-bg: '#97fb1';
}
Adding ?_raw=custom-bg to the URL produces this instead:
:root {
  --custom-bg: #97fb1;
}
This can then be used as a color value like so:
h1 {
  background-color: var(--custom-bg);
}
Development
To set up this plugin locally, first checkout the code. Then create a new virtual environment:
cd datasette-css-properties
python3 -mvenv venv
source venv/bin/activate
Or if you are using pipenv:
pipenv shell
Now install the dependencies and tests:
pip install -e '.[test]'
To run the tests:
pytest
",,,,,,
327236119,MDEwOlJlcG9zaXRvcnkzMjcyMzYxMTk=,datasette-export-notebook,simonw/datasette-export-notebook,0,9599,https://github.com/simonw/datasette-export-notebook,Datasette plugin providing instructions for exporting data to Jupyter or Observable,0,2021-01-06T07:37:00Z,2021-12-23T23:19:42Z,2021-12-23T23:19:38Z,,21,10,10,Python,1,1,1,1,0,2,0,0,2,,"[""datasette-io"", ""datasette-plugin""]",2,2,10,main,"{""admin"": false, ""maintain"": false, ""push"": false, ""triage"": false, ""pull"": false}",,,2,2,"# datasette-export-notebook
[](https://pypi.org/project/datasette-export-notebook/)
[](https://github.com/simonw/datasette-export-notebook/releases)
[](https://github.com/simonw/datasette-export-notebook/actions?query=workflow%3ATest)
[](https://github.com/simonw/datasette-export-notebook/blob/main/LICENSE)
Datasette plugin providing instructions for exporting data to a [Jupyter](https://jupyter.org/) or [Observable](https://observablehq.com/) notebook.
## Installation
Install this plugin in the same environment as Datasette.
$ datasette install datasette-export-notebook
## Usage
Once installed, the plugin will add a `.Notebook` export option to every table and query. Clicking on this link will show instructions for exporting the data to Jupyter or Observable.
## Demo
You can see this plugin in action on the [latest-with-plugins.datasette.io](https://latest-with-plugins.datasette.io/) Datasette instance - for example on [/github/commits.Notebook](https://latest-with-plugins.datasette.io/github/commits.Notebook).
## Development
To set up this plugin locally, first checkout the code. Then create a new virtual environment:
cd datasette-export-notebook
python3 -mvenv venv
source venv/bin/activate
Or if you are using `pipenv`:
pipenv shell
Now install the dependencies and tests:
pip install -e '.[test]'
To run the tests:
pytest
","
datasette-export-notebook
Datasette plugin providing instructions for exporting data to a Jupyter or Observable notebook.
Installation
Install this plugin in the same environment as Datasette.
$ datasette install datasette-export-notebook
Usage
Once installed, the plugin will add a .Notebook export option to every table and query. Clicking on this link will show instructions for exporting the data to Jupyter or Observable.
To set up this plugin locally, first checkout the code. Then create a new virtual environment:
cd datasette-export-notebook
python3 -mvenv venv
source venv/bin/activate
Or if you are using pipenv:
pipenv shell
Now install the dependencies and tests:
pip install -e '.[test]'
To run the tests:
pytest
",1,public,0,,,
331151708,MDEwOlJlcG9zaXRvcnkzMzExNTE3MDg=,datasette-leaflet-freedraw,simonw/datasette-leaflet-freedraw,0,9599,https://github.com/simonw/datasette-leaflet-freedraw,Draw polygons on maps in Datasette,0,2021-01-20T00:55:03Z,2021-12-17T22:07:50Z,2022-02-03T20:24:37Z,,1177,9,9,Python,1,1,1,1,0,2,0,0,2,,"[""datasette"", ""datasette-io"", ""datasette-plugin"", ""leafletjs""]",2,2,9,main,"{""admin"": false, ""maintain"": false, ""push"": false, ""triage"": false, ""pull"": false}",,,2,2,"# datasette-leaflet-freedraw
[](https://pypi.org/project/datasette-leaflet-freedraw/)
[](https://github.com/simonw/datasette-leaflet-freedraw/releases)
[](https://github.com/simonw/datasette-leaflet-freedraw/actions?query=workflow%3ATest)
[](https://github.com/simonw/datasette-leaflet-freedraw/blob/main/LICENSE)
Draw polygons on maps in Datasette
Project background: [Drawing shapes on a map to query a SpatiaLite database](https://simonwillison.net/2021/Jan/24/drawing-shapes-spatialite/).
## Installation
Install this plugin in the same environment as Datasette.
$ datasette install datasette-leaflet-freedraw
## Usage
If a table has a SpatiaLite `geometry` column, the plugin will add a map interface to the table page allowing users to draw a shape on the map to find rows with a geometry that intersects that shape.
The plugin can also work with arbitrary SQL queries. There it looks for input fields with a name of `freedraw`, or a name that ends in `_freedraw`, and replaces them with a map interface.
The map interface uses the [FreeDraw](https://freedraw.herokuapp.com/) Leaflet plugin.
## Demo
You can try out this plugin to run searches against the GreenInfo Network California Protected Areas Database. Here's [an example query](https://calands.datasettes.com/calands?sql=select%0D%0A++AsGeoJSON%28geometry%29%2C+*%0D%0Afrom%0D%0A++CPAD_2020a_SuperUnits%0D%0Awhere%0D%0A++PARK_NAME+like+%27%25mini%25%27+and%0D%0A++Intersects%28GeomFromGeoJSON%28%3Afreedraw%29%2C+geometry%29+%3D+1%0D%0A++and+CPAD_2020a_SuperUnits.rowid+in+%28%0D%0A++++select%0D%0A++++++rowid%0D%0A++++from%0D%0A++++++SpatialIndex%0D%0A++++where%0D%0A++++++f_table_name+%3D+%27CPAD_2020a_SuperUnits%27%0D%0A++++++and+search_frame+%3D+GeomFromGeoJSON%28%3Afreedraw%29%0D%0A++%29&freedraw=%7B%22type%22%3A%22MultiPolygon%22%2C%22coordinates%22%3A%5B%5B%5B%5B-122.42202758789064%2C37.82280243352759%5D%2C%5B-122.39868164062501%2C37.823887203271454%5D%2C%5B-122.38220214843751%2C37.81846319511331%5D%2C%5B-122.35061645507814%2C37.77071473849611%5D%2C%5B-122.34924316406251%2C37.74465712069939%5D%2C%5B-122.37258911132814%2C37.703380457832374%5D%2C%5B-122.39044189453125%2C37.690340943717715%5D%2C%5B-122.41241455078126%2C37.680559803205135%5D%2C%5B-122.44262695312501%2C37.67295135774715%5D%2C%5B-122.47283935546876%2C37.67295135774715%5D%2C%5B-122.52502441406251%2C37.68382032669382%5D%2C%5B-122.53463745117189%2C37.6892542140253%5D%2C%5B-122.54699707031251%2C37.690340943717715%5D%2C%5B-122.55798339843751%2C37.72945260537781%5D%2C%5B-122.54287719726564%2C37.77831314799672%5D%2C%5B-122.49893188476564%2C37.81303878836991%5D%2C%5B-122.46185302734376%2C37.82822612280363%5D%2C%5B-122.42889404296876%2C37.82822612280363%5D%2C%5B-122.42202758789064%2C37.82280243352759%5D%5D%5D%5D%7D) showing mini parks in San Francisco:
```sql
select
AsGeoJSON(geometry), *
from
CPAD_2020a_SuperUnits
where
PARK_NAME like '%mini%' and
Intersects(GeomFromGeoJSON(:freedraw), geometry) = 1
and CPAD_2020a_SuperUnits.rowid in (
select
rowid
from
SpatialIndex
where
f_table_name = 'CPAD_2020a_SuperUnits'
and search_frame = GeomFromGeoJSON(:freedraw)
)
```

## Development
To set up this plugin locally, first checkout the code. Then create a new virtual environment:
cd datasette-leaflet-freedraw
python3 -mvenv venv
source venv/bin/activate
Or if you are using `pipenv`:
pipenv shell
Now install the dependencies and tests:
pip install -e '.[test]'
To run the tests:
pytest
","
Install this plugin in the same environment as Datasette.
$ datasette install datasette-leaflet-freedraw
Usage
If a table has a SpatiaLite geometry column, the plugin will add a map interface to the table page allowing users to draw a shape on the map to find rows with a geometry that intersects that shape.
The plugin can also work with arbitrary SQL queries. There it looks for input fields with a name of freedraw or that ends in _freedraw and replaces them with a map interface.
The map interface uses the FreeDraw Leaflet plugin.
Demo
You can try out this plugin to run searches against the GreenInfo Network California Protected Areas Database. Here's an example query showing mini parks in San Francisco:
select
  AsGeoJSON(geometry), *
from
  CPAD_2020a_SuperUnits
where
  PARK_NAME like '%mini%' and
  Intersects(GeomFromGeoJSON(:freedraw), geometry) = 1
  and CPAD_2020a_SuperUnits.rowid in (
    select
      rowid
    from
      SpatialIndex
    where
      f_table_name = 'CPAD_2020a_SuperUnits'
      and search_frame = GeomFromGeoJSON(:freedraw)
  )
Development
To set up this plugin locally, first checkout the code. Then create a new virtual environment:
cd datasette-leaflet-freedraw
python3 -mvenv venv
source venv/bin/activate
Or if you are using pipenv:
pipenv shell
Now install the dependencies and tests:
pip install -e '.[test]'
To run the tests:
pytest
",1,public,0,,,
331720824,MDEwOlJlcG9zaXRvcnkzMzE3MjA4MjQ=,datasette-leaflet,simonw/datasette-leaflet,0,9599,https://github.com/simonw/datasette-leaflet,Datasette plugin adding the Leaflet JavaScript library,0,2021-01-21T18:41:19Z,2021-04-20T16:27:35Z,2021-02-01T22:20:28Z,,124,3,3,JavaScript,1,1,1,1,0,0,0,0,2,,"[""datasette"", ""datasette-plugin"", ""datasette-io""]",0,2,3,main,"{""admin"": false, ""push"": false, ""pull"": false}",,,0,1,"# datasette-leaflet
[](https://pypi.org/project/datasette-leaflet/)
[](https://github.com/simonw/datasette-leaflet/releases)
[](https://github.com/simonw/datasette-leaflet/actions?query=workflow%3ATest)
[](https://github.com/simonw/datasette-leaflet/blob/main/LICENSE)
Datasette plugin adding the [Leaflet](https://leafletjs.com/) JavaScript library.
A growing number of Datasette plugins depend on the Leaflet JavaScript mapping library. They each have their own way of loading Leaflet, which could result in loading it multiple times (with multiple versions) if more than one plugin is installed.
This library is intended to solve this problem, by providing a single plugin they can all depend on that loads Leaflet in a reusable way.
Plugins that use this:
- [datasette-leaflet-freedraw](https://datasette.io/plugins/datasette-leaflet-freedraw)
- [datasette-leaflet-geojson](https://datasette.io/plugins/datasette-leaflet-geojson)
- [datasette-cluster-map](https://datasette.io/plugins/datasette-cluster-map)
## Installation
You can install this plugin like so:
datasette install datasette-leaflet
Usually this plugin will be a dependency of other plugins, so it should be installed automatically when you install them.
## Usage
The plugin makes `leaflet.js` and `leaflet.css` available as static files. It provides two custom template variables with the URLs of those two files.
- `{{ datasette_leaflet_url }}` is the URL to the JavaScript
- `{{ datasette_leaflet_css_url }}` is the URL to the CSS
These URLs are also made available as global JavaScript constants:
- `datasette.leaflet.JAVASCRIPT_URL`
- `datasette.leaflet.CSS_URL`
The JavaScript is packaged as a [JavaScript module](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Modules). You can dynamically import the JavaScript from a custom template like this:
```html+jinja
```
You can load the CSS like this:
```html+jinja
<link rel=""stylesheet"" href=""{{ datasette_leaflet_css_url }}"">
```
Or dynamically like this:
```html+jinja
<script>
let link = document.createElement('link');
link.rel = 'stylesheet';
link.href = '{{ datasette_leaflet_css_url }}';
document.head.appendChild(link);
</script>
```
Here's a full example that loads the JavaScript and CSS and renders a map:
```html+jinja
```
","
datasette-leaflet
Datasette plugin adding the Leaflet JavaScript library.
A growing number of Datasette plugins depend on the Leaflet JavaScript mapping library. They each have their own way of loading Leaflet, which could result in loading it multiple times (with multiple versions) if more than one plugin is installed.
This library is intended to solve this problem, by providing a single plugin they can all depend on that loads Leaflet in a reusable way.
<script>
let link = document.createElement('link');
link.rel = 'stylesheet';
link.href = '{{ datasette_leaflet_css_url }}';
document.head.appendChild(link);
</script>
Here's a full example that loads the JavaScript and CSS and renders a map:
Install this plugin in the same environment as Datasette.
$ datasette install datasette-basemap
Usage
This plugin will make a basemap database available containing OpenStreetMap tiles for zoom levels 0-6 in the mbtiles format. It is designed for use with the datasette-tiles tile server plugin.
To set up this plugin locally, first checkout the code. Then create a new virtual environment:
cd datasette-basemap
python3 -mvenv venv
source venv/bin/activate
Or if you are using pipenv:
pipenv shell
Now install the dependencies and tests:
pip install -e '.[test]'
To run the tests:
pytest
",,,,,,
335175637,MDEwOlJlcG9zaXRvcnkzMzUxNzU2Mzc=,datasette-tiles,simonw/datasette-tiles,0,9599,https://github.com/simonw/datasette-tiles,"Mapping tile server for Datasette, serving tiles from MBTiles packages",0,2021-02-02T05:11:12Z,2022-03-22T01:52:30Z,2022-03-22T01:52:27Z,https://datasette.io/plugins/datasette-tiles,54,4,4,Python,1,1,1,1,0,4,0,0,8,,"[""datasette"", ""datasette-io"", ""datasette-plugin"", ""mbtiles""]",4,8,4,main,"{""admin"": false, ""maintain"": false, ""push"": false, ""triage"": false, ""pull"": false}",,,4,3,"# datasette-tiles
[](https://pypi.org/project/datasette-tiles/)
[](https://github.com/simonw/datasette-tiles/releases)
[](https://github.com/simonw/datasette-tiles/actions?query=workflow%3ATest)
[](https://github.com/simonw/datasette-tiles/blob/main/LICENSE)
Datasette plugin for serving MBTiles map tiles
## Installation
Install this plugin in the same environment as Datasette.
$ datasette install datasette-tiles
## Demo
You can try this plugin out at https://datasette-tiles-demo.datasette.io/-/tiles
## Usage
This plugin scans all database files connected to Datasette to see if any of them are valid MBTiles databases.
It can then serve tiles from those databases at the following URL:
/-/tiles/db-name/zoom/x/y.png
An example map for each database demonstrating the configured minimum and maximum zoom for that database can be found at `/-/tiles/db-name` - this can also be accessed via the table and database action menus for that database.
Visit `/-/tiles` for an index page of attached valid databases.
You can install the [datasette-basemap](https://datasette.io/plugins/datasette-basemap) plugin to get a `basemap` default set of tiles, handling zoom levels 0 to 6 using OpenStreetMap.
### Tile coordinate systems
There are two tile coordinate systems in common use for online maps. The first is used by OpenStreetMap and Google Maps, the second is from a specification called [Tile Map Service](https://en.wikipedia.org/wiki/Tile_Map_Service), or TMS.
Both systems use three components: `z/x/y` - where `z` is the zoom level, `x` is the column and `y` is the row.
The difference is in the way the `y` value is counted. OpenStreetMap has y=0 at the top. TMS has y=0 at the bottom.
An illustrative example: at zoom level 2 the map is divided into 16 total tiles. The OpenStreetMap scheme numbers them like so:
0/0 1/0 2/0 3/0
0/1 1/1 2/1 3/1
0/2 1/2 2/2 3/2
0/3 1/3 2/3 3/3
The TMS scheme looks like this:
0/3 1/3 2/3 3/3
0/2 1/2 2/2 3/2
0/1 1/1 2/1 3/1
0/0 1/0 2/0 3/0
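Put another way, the row number is flipped between the two schemes: `y_tms = 2^zoom - 1 - y_osm`. At zoom level 2 that gives `4 - 1 - 0 = 3`, so the top-left tile is `0/0` in the OpenStreetMap scheme and `0/3` in TMS.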
`datasette-tiles` can serve tiles using either of these standards. For the OpenStreetMap / Google Maps 0-at-the-top system, use the following URL:
/-/tiles/database-name/{z}/{x}/{y}.png
For the TMS 0-at-the-bottom system, use this:
/-/tiles-tms/database-name/{z}/{x}/{y}.png
### Configuring a Leaflet tile layer
The following JavaScript will configure a [Leaflet TileLayer](https://leafletjs.com/reference-1.7.1.html#tilelayer) for use with this plugin:
```javascript
var tiles = leaflet.tileLayer(""/-/tiles/basemap/{z}/{x}/{y}.png"", {
minZoom: 0,
maxZoom: 6,
attribution: ""\u00a9 OpenStreetMap contributors""
});
```
### Tile stacks
`datasette-tiles` can be configured to serve tiles from multiple attached MBTiles files, searching each database in order for a tile and falling back to the next in line if that tile is not found.
For a demo of this in action, visit https://datasette-tiles-demo.datasette.io/-/tiles-stack and zoom in on Japan. It should start showing [Stamen's Toner map](http://maps.stamen.com) of Japan once you get to zoom levels 6 and 7.
The `/-/tiles-stack/{z}/{x}/{y}.png` endpoint provides this feature.
If you start Datasette like this:
datasette world.mbtiles country.mbtiles city1.mbtiles city2.mbtiles
Any requests for a tile from the `/-/tiles-stack` path will first check the `city2` database, then `city1`, then `country`, then `world`.
If you have the [datasette-basemap](https://datasette.io/plugins/datasette-basemap) plugin installed it will be given special treatment: the `basemap` database will always be the last database checked for a tile.
Rather than rely on the order in which databases were attached, you can instead configure an explicit order using the `tiles-stack-order` plugin setting. Add the following to your `metadata.json` file:
```json
{
""plugins"": {
""datasette-tiles"": {
""tiles-stack-order"": [""world"", ""country""]
}
}
}
```
You can then run Datasette like this:
datasette -m metadata.json country.mbtiles world.mbtiles
This endpoint serves tiles using the OpenStreetMap / Google Maps coordinate system. To load tiles using the TMS coordinate system use this endpoint instead:
/-/tiles-stack-tms/{z}/{x}/{y}.png
### Retina tiles
Retina (double resolution) tiles are supported by `datasette-tiles` if the MBTiles database file contains 512x512 tile images as opposed to the default of 256x256. JavaScript libraries such as Leaflet will serve these tiles with a fixed 256x256 size, which will cause them to be displayed correctly by capable operating systems.
## Development
To set up this plugin locally, first checkout the code. Then create a new virtual environment:
cd datasette-tiles
python3 -mvenv venv
source venv/bin/activate
Or if you are using `pipenv`:
pipenv shell
Now install the dependencies and tests:
pip install -e '.[test]'
To run the tests:
pytest
","
datasette-tiles
Datasette plugin for serving MBTiles map tiles
Installation
Install this plugin in the same environment as Datasette.
This plugin scans all database files connected to Datasette to see if any of them are valid MBTiles databases.
It can then serve tiles from those databases at the following URL:
/-/tiles/db-name/zoom/x/y.png
An example map for each database demonstrating the configured minimum and maximum zoom for that database can be found at /-/tiles/db-name - this can also be accessed via the table and database action menus for that database.
Visit /-/tiles for an index page of attached valid databases.
You can install the datasette-basemap plugin to get a basemap default set of tiles, handling zoom levels 0 to 6 using OpenStreetMap.
Tile coordinate systems
There are two tile coordinate systems in common use for online maps. The first is used by OpenStreetMap and Google Maps, the second is from a specification called Tile Map Service, or TMS.
Both systems use three components: z/x/y - where z is the zoom level, x is the column and y is the row.
The difference is in the way the y value is counted. OpenStreetMap has y=0 at the top. TMS has y=0 at the bottom.
An illustrative example: at zoom level 2 the map is divided into 16 total tiles. The OpenStreetMap scheme numbers them like so:
datasette-tiles can be configured to serve tiles from multiple attached MBTiles files, searching each database in order for a tile and falling back to the next in line if that tile is not found.
Any requests for a tile from the /-/tiles-stack path will first check the city2 database, then city1, then country, then world.
If you have the datasette-basemap plugin installed it will be given special treatment: the basemap database will always be the last database checked for a tile.
Rather than rely on the order in which databases were attached, you can instead configure an explicit order using the tiles-stack-order plugin setting. Add the following to your metadata.json file:
This endpoint serves tiles using the OpenStreetMap / Google Maps coordinate system. To load tiles using the TMS coordinate system use this endpoint instead:
/-/tiles-stack-tms/{z}/{x}/{y}.png
Retina tiles
Retina (double resolution) tiles are supported by datasette-tiles if the MBTiles database file contains 512x512 tile images as opposed to the default of 256x256. JavaScript libraries such as Leaflet will serve these tiles with a fixed 256x256 size, which will cause them to be displayed correctly by capable operating systems.
Development
To set up this plugin locally, first checkout the code. Then create a new virtual environment:
cd datasette-tiles
python3 -mvenv venv
source venv/bin/activate
Download map tiles and store them in an MBTiles database
Installation
Install this tool using pip:
$ pip install download-tiles
Usage
This tool downloads tiles from a specified TMS (Tile Map Server) server for a specified bounding box and range of zoom levels and stores those tiles in an MBTiles SQLite database. It is a command-line wrapper around the Landez Python library.
Please use this tool responsibly. Consult the usage policies of the tile servers you are interacting with, for example the OpenStreetMap Tile Usage Policy.
Running the following will download zoom levels 0-3 of OpenStreetMap, 85 tiles total, and store them in a SQLite database called world.mbtiles:
download-tiles world.mbtiles
You can customize which tile and zoom levels are downloaded using command options:
--zoom-levels=0-3 or -z=0-3
The different zoom levels to download. Specify a single number, e.g. 15, or a range of numbers e.g. 0-4. Be careful with this setting as you can easily go over the limits requested by the underlying tile server.
--bbox=3.9,-6.3,14.5,10.2 or -b=3.9,-6.3,14.5,10.2
The bounding box to fetch. Should be specified as min-lon,min-lat,max-lon,max-lat. You can use bboxfinder.com to find these for different areas.
--city=london or --country=madagascar
These options can be used instead of --bbox. The city or country specified will be looked up using the Nominatim API and used to derive a bounding box.
--show-bbox
Use this option to output the bounding box that was retrieved for the --city or --country without downloading any tiles.
--name=Name
A name for this tile collection, used for the name field in the metadata table. If not specified a UUID will be used, or if you used --city or --country the name will be set to the full name of that place.
The tile server URL to use. This should include {z} and {x} and {y} specifiers, and can optionally include {s} for subdomains.
The default URL used here is for OpenStreetMap, http://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png
--tiles-subdomains=a,b,c
A comma-separated list of subdomains to use for the {s} parameter.
--verbose
Use this option to turn on verbose logging.
--cache-dir=/tmp/tiles
Provide a directory to cache downloaded tiles between runs. This can be useful if you are worried you might not have used the correct options for the bounding box or zoom levels.
Development
To contribute to this tool, first checkout the code. Then create a new virtual environment:
cd download-tiles
python -mvenv venv
source venv/bin/activate
Or if you are using pipenv:
pipenv shell
Now install the dependencies and tests:
pip install -e '.[test]'
To run the tests:
pytest
",,,,,,
342126610,MDEwOlJlcG9zaXRvcnkzNDIxMjY2MTA=,datasette-block,simonw/datasette-block,0,9599,https://github.com/simonw/datasette-block,Block all access to specific path prefixes,0,2021-02-25T04:51:08Z,2021-02-25T08:18:28Z,2021-02-25T05:03:45Z,https://datasette.io/plugins/datasette-block,4,1,1,Python,1,1,1,1,0,0,0,0,0,,"[""datasette"", ""datasette-plugin"", ""datasette-io""]",0,0,1,main,"{""admin"": false, ""push"": false, ""pull"": false}",,,0,1,"# datasette-block
[](https://pypi.org/project/datasette-block/)
[](https://github.com/simonw/datasette-block/releases)
[](https://github.com/simonw/datasette-block/actions?query=workflow%3ATest)
[](https://github.com/simonw/datasette-block/blob/main/LICENSE)
Block all access to specific path prefixes
## Installation
Install this plugin in the same environment as Datasette.
$ datasette install datasette-block
## Configuration
Add the following to `metadata.json` to block specific path prefixes:
```json
{
""plugins"": {
""datasette-block"": {
""prefixes"": [""/all/""]
}
}
}
```
This will cause a 403 error to be returned for any path beginning with `/all/`.
This blocking happens as an ASGI wrapper around Datasette.
## Why would you need this?
You almost always would not. I use it with `datasette-ripgrep` to block access to static assets for unauthenticated users.
## Development
To set up this plugin locally, first checkout the code. Then create a new virtual environment:
cd datasette-block
python3 -mvenv venv
source venv/bin/activate
Or if you are using `pipenv`:
pipenv shell
Now install the dependencies and tests:
pip install -e '.[test]'
To run the tests:
pytest
","
datasette-block
Block all access to specific path prefixes
Installation
Install this plugin in the same environment as Datasette.
$ datasette install datasette-block
Configuration
Add the following to metadata.json to block specific path prefixes:
This will cause a 403 error to be returned for any path beginning with /all/.
This blocking happens as an ASGI wrapper around Datasette.
Why would you need this?
You almost always would not. I use it with datasette-ripgrep to block access to static assets for unauthenticated users.
Development
To set up this plugin locally, first checkout the code. Then create a new virtual environment:
cd datasette-block
python3 -mvenv venv
source venv/bin/activate
Or if you are using pipenv:
pipenv shell
Now install the dependencies and tests:
pip install -e '.[test]'
To run the tests:
pytest
",,,,,,
346597557,MDEwOlJlcG9zaXRvcnkzNDY1OTc1NTc=,tableau-to-sqlite,simonw/tableau-to-sqlite,0,9599,https://github.com/simonw/tableau-to-sqlite,Fetch data from Tableau into a SQLite database,0,2021-03-11T06:12:02Z,2021-06-10T04:40:44Z,2021-04-29T16:11:03Z,,212,8,8,Python,1,1,1,1,0,2,0,0,2,apache-2.0,"[""datasette-io"", ""datasette-tool""]",2,2,8,main,"{""admin"": false, ""push"": false, ""pull"": false}",,,2,1,"# tableau-to-sqlite
[](https://pypi.org/project/tableau-to-sqlite/)
[](https://github.com/simonw/tableau-to-sqlite/releases)
[](https://github.com/simonw/tableau-to-sqlite/actions?query=workflow%3ATest)
[](https://github.com/simonw/tableau-to-sqlite/blob/master/LICENSE)
Fetch data from Tableau into a SQLite database. A wrapper around [TableauScraper](https://github.com/bertrandmartel/tableau-scraping/).
## Installation
Install this tool using `pip`:
$ pip install tableau-to-sqlite
## Usage
If you have the URL to a Tableau dashboard like this:
https://results.mo.gov/t/COVID19/views/VaccinationsDashboard/Vaccinations
You can pass that directly to the tool:
tableau-to-sqlite tableau.db \
https://results.mo.gov/t/COVID19/views/VaccinationsDashboard/Vaccinations
This will create a SQLite database called `tableau.db` containing one table for each of the worksheets in that dashboard.
If the dashboard is hosted on https://public.tableau.com/ you can instead provide the view name. This will be two strings separated by a `/` symbol - something like this:
OregonCOVID-19VaccineProviderEnrollment/COVID-19VaccineProviderEnrollment
Now run the tool like this:
tableau-to-sqlite tableau.db \
OregonCOVID-19VaccineProviderEnrollment/COVID-19VaccineProviderEnrollment
## Get the data as JSON or CSV
If you're building a [git scraper](https://simonwillison.net/2020/Oct/9/git-scraping/) you may want to convert the data gathered by this tool to CSV or JSON to check into your repository.
You can do that using [sqlite-utils](https://sqlite-utils.datasette.io/). Install it using `pip`:
pip install sqlite-utils
You can dump out a table as JSON like so:
sqlite-utils rows tableau.db \
'Admin Site and County Map Site No Info' > tableau.json
Or as CSV like this:
sqlite-utils rows tableau.db --csv \
'Admin Site and County Map Site No Info' > tableau.csv
## Development
To contribute to this tool, first checkout the code. Then create a new virtual environment:
cd tableau-to-sqlite
python -mvenv venv
source venv/bin/activate
Or if you are using `pipenv`:
pipenv shell
Now install the dependencies and tests:
pip install -e '.[test]'
To run the tests:
pytest
","
tableau-to-sqlite
Fetch data from Tableau into a SQLite database. A wrapper around TableauScraper.
Installation
Install this tool using pip:
$ pip install tableau-to-sqlite
Usage
If you have the URL to a Tableau dashboard like this:
This will create a SQLite database called tableau.db containing one table for each of the worksheets in that dashboard.
If the dashboard is hosted on https://public.tableau.com/ you can instead provide the view name. This will be two strings separated by a / symbol - something like this:
If you're building a git scraper you may want to convert the data gathered by this tool to CSV or JSON to check into your repository.
You can do that using sqlite-utils. Install it using pip:
pip install sqlite-utils
You can dump out a table as JSON like so:
sqlite-utils rows tableau.db \
'Admin Site and County Map Site No Info' > tableau.json
Or as CSV like this:
sqlite-utils rows tableau.db --csv \
'Admin Site and County Map Site No Info' > tableau.csv
Development
To contribute to this tool, first checkout the code. Then create a new virtual environment:
cd tableau-to-sqlite
python -mvenv venv
source venv/bin/activate
Or if you are using pipenv:
pipenv shell
Now install the dependencies and tests:
pip install -e '.[test]'
To run the tests:
pytest
",,,,,,
347263722,MDEwOlJlcG9zaXRvcnkzNDcyNjM3MjI=,django-sql-dashboard,simonw/django-sql-dashboard,0,9599,https://github.com/simonw/django-sql-dashboard,Django app for building dashboards using raw SQL queries,0,2021-03-13T03:38:23Z,2022-04-19T01:13:12Z,2022-04-20T00:27:39Z,https://django-sql-dashboard.datasette.io/,513,335,335,Python,1,1,1,1,0,28,0,0,25,apache-2.0,"[""dashboards"", ""datasette-io"", ""datasette-tool"", ""django"", ""sql""]",28,25,335,main,"{""admin"": false, ""maintain"": false, ""push"": false, ""triage"": false, ""pull"": false}",,,28,9,"# django-sql-dashboard
[](https://pypi.org/project/django-sql-dashboard/)
[](https://github.com/simonw/django-sql-dashboard/releases)
[](https://github.com/simonw/django-sql-dashboard/actions?query=workflow%3ATest)
[](http://django-sql-dashboard.datasette.io/en/latest/?badge=latest)
[](https://github.com/simonw/django-sql-dashboard/blob/main/LICENSE)
Django SQL Dashboard provides an authenticated interface for executing read-only SQL queries directly against your PostgreSQL database, bringing a useful subset of [Datasette](https://datasette.io/) to Django.
Applications include ad-hoc analysis and debugging, plus the creation of reporting dashboards that can be shared with team members or published online.
See my blog for [more about this project](https://simonwillison.net/2021/May/10/django-sql-dashboard/), including [a video demo](https://www.youtube.com/watch?v=ausrmMZkPEY).
Features include:
- Safely run one or more read-only SQL queries against your database and view the results in your browser
- Bookmark queries and share those links with other members of your team
- Create [saved dashboards](https://django-sql-dashboard.datasette.io/en/latest/saved-dashboards.html) from your queries, with full control over who can view and edit them
- [Named parameters](https://django-sql-dashboard.datasette.io/en/latest/sql.html#sql-parameters) such as `select * from entries where id = %(id)s` will be turned into form fields, allowing quick creation of interactive dashboards - see the example query just below this list
- Produce [bar charts](https://django-sql-dashboard.datasette.io/en/latest/widgets.html#bar-label-bar-quantity), [progress bars](https://django-sql-dashboard.datasette.io/en/latest/widgets.html#total-count-completed-count) and more from SQL queries, with the ability to easily create new [custom dashboard widgets](https://django-sql-dashboard.datasette.io/en/latest/widgets.html#custom-widgets) using the Django template system
- Write SQL queries that safely construct and render [markdown](https://django-sql-dashboard.datasette.io/en/latest/widgets.html#markdown) and [HTML](https://django-sql-dashboard.datasette.io/en/latest/widgets.html#html)
- Export the full results of a SQL query as a downloadable CSV or TSV file, using a combination of Django's [streaming HTTP response](https://docs.djangoproject.com/en/3.2/ref/request-response/#django.http.StreamingHttpResponse) mechanism and PostgreSQL [server-side cursors](https://www.psycopg.org/docs/usage.html#server-side-cursors) to efficiently stream large amounts of data without running out of resources
- Copy and paste the results of SQL queries directly into tools such as Google Sheets or Excel
- Uses Django's authentication system, so dashboard accounts can be granted using Django's Admin tools
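Here is what a parameterised dashboard query can look like in practice (a minimal sketch - `entries` is just the illustrative table name used in the list above):
```sql
-- %(id)s is rendered as a form field on the dashboard page
select * from entries where id = %(id)s
```
Submitting a value through the generated form re-runs the query with that value bound to the parameter.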
## Documentation
Full documentation is at [django-sql-dashboard.datasette.io](https://django-sql-dashboard.datasette.io/)
## Screenshot
## Alternatives
- [django-sql-explorer](https://github.com/groveco/django-sql-explorer) provides a related set of functionality that also works against database backends other than PostgreSQL
","
django-sql-dashboard
Django SQL Dashboard provides an authenticated interface for executing read-only SQL queries directly against your PostgreSQL database, bringing a useful subset of Datasette to Django.
Applications include ad-hoc analysis and debugging, plus the creation of reporting dashboards that can be shared with team members or published online.
Write SQL queries that safely construct and render markdown and HTML
Export the full results of a SQL query as a downloadable CSV or TSV file, using a combination of Django's streaming HTTP response mechanism and PostgreSQL server-side cursors to efficiently stream large amounts of data without running out of resources
Copy and paste the results of SQL queries directly into tools such as Google Sheets or Excel
Uses Django's authentication system, so dashboard accounts can be granted using Django's Admin tools
django-sql-explorer provides a related set of functionality that also works against database backends other than PostgreSQL
",1,public,0,,,
375546675,MDEwOlJlcG9zaXRvcnkzNzU1NDY2NzU=,datasette-placekey,simonw/datasette-placekey,0,9599,https://github.com/simonw/datasette-placekey,SQL functions for working with placekeys,0,2021-06-10T02:31:27Z,2021-06-10T02:33:22Z,2021-06-10T02:32:42Z,https://datasette.io/plugins/datasette-placekey,3,0,0,Python,1,1,1,1,0,0,0,0,1,,"[""datasette"", ""datasette-plugin"", ""datasette-io"", ""placekey""]",0,1,0,main,"{""admin"": false, ""push"": false, ""pull"": false}",,,0,1,"# datasette-placekey
[](https://pypi.org/project/datasette-placekey/)
[](https://github.com/simonw/datasette-placekey/releases)
[](https://github.com/simonw/datasette-placekey/actions?query=workflow%3ATest)
[](https://github.com/simonw/datasette-placekey/blob/main/LICENSE)
SQL functions for working with [placekeys](https://www.placekey.io/).
## Installation
Install this plugin in the same environment as Datasette.
$ datasette install datasette-placekey
## Usage
The following SQL functions are exposed - [documentation here](https://placekey.github.io/placekey-py/placekey.html#module-placekey.placekey).
```sql
select
geo_to_placekey(33.0896104,129.7900839),
placekey_to_geo('@6nh-nhh-kvf'),
placekey_to_geo_latitude('@6nh-nhh-kvf'),
placekey_to_geo_longitude('@6nh-nhh-kvf'),
placekey_to_h3('@6nh-nhh-kvf'),
h3_to_placekey('8a30d94e4c87fff'),
placekey_to_geojson('@6nh-nhh-kvf'),
placekey_to_wkt('@6nh-nhh-kvf'),
placekey_format_is_valid('@6nh-nhh-kvf');
```
## Development
To set up this plugin locally, first checkout the code. Then create a new virtual environment:
cd datasette-placekey
python3 -mvenv venv
source venv/bin/activate
Or if you are using `pipenv`:
pipenv shell
Now install the dependencies and tests:
pip install -e '.[test]'
To run the tests:
pytest
","
To set up this plugin locally, first checkout the code. Then create a new virtual environment:
cd datasette-placekey
python3 -mvenv venv
source venv/bin/activate
Or if you are using pipenv:
pipenv shell
Now install the dependencies and test dependencies:
pip install -e '.[test]'
To run the tests:
pytest
",,,,,,
390535500,MDEwOlJlcG9zaXRvcnkzOTA1MzU1MDA=,datasette-remote-metadata,simonw/datasette-remote-metadata,0,9599,https://github.com/simonw/datasette-remote-metadata,Periodically refresh Datasette metadata from a remote URL,0,2021-07-28T23:17:19Z,2021-12-13T19:40:51Z,2021-12-13T19:40:48Z,,8,3,3,Python,1,1,1,1,0,0,0,0,0,apache-2.0,[],0,0,3,main,"{""admin"": false, ""maintain"": false, ""push"": false, ""triage"": false, ""pull"": false}",,,0,2,"# datasette-remote-metadata
[](https://pypi.org/project/datasette-remote-metadata/)
[](https://github.com/simonw/datasette-remote-metadata/releases)
[](https://github.com/simonw/datasette-remote-metadata/actions?query=workflow%3ATest)
[](https://github.com/simonw/datasette-remote-metadata/blob/main/LICENSE)
Periodically refresh Datasette metadata from a remote URL
## Installation
Install this plugin in the same environment as Datasette.
$ datasette install datasette-remote-metadata
## Usage
Add the following to your `metadata.json`:
```json
{
""plugins"": {
""datasette-remote-metadata"": {
""url"": ""https://example.com/remote-metadata.yml""
}
}
}
```
The plugin will fetch the specified metadata from that URL at startup and combine it with any existing metadata. You can use a URL to either a JSON file or a YAML file.
It will periodically refresh that metadata - by default every 30 seconds, unless you specify an alternative `""ttl""` value in the plugin configuration.
## Configuration
Available configuration options are as follows:
- `""url""` - the URL to retrieve remote metadata from. Can link to a JSON or a YAML file.
- `""ttl""` - integer value in secords: how frequently should the script check for fresh metadata. Defaults to 30 seconds.
- `""headers""` - a dictionary of additional request headers to send.
- `""cachebust""` - if true, a random `?0.29508` value will be added to the query string of the remote metadata to bust any intermediary caches.
This example `metadata.json` configuration refreshes every 10 seconds, uses cache busting and sends an `Authorization: Bearer xyz` header with the request:
```json
{
""plugins"": {
""datasette-remote-metadata"": {
""url"": ""https://example.com/remote-metadata.yml"",
""ttl"": 10,
""cachebust"": true,
""headers"": {
""Authorization"": ""Bearer xyz""
}
}
}
}
```
Here is the same example if you are using `metadata.yaml` for configuration:
```yaml
plugins:
datasette-remote-metadata:
url: https://example.com/remote-metadata.yml
ttl: 10
cachebust: true
headers:
Authorization: Bearer xyz
```
## Development
To set up this plugin locally, first checkout the code. Then create a new virtual environment:
cd datasette-remote-metadata
python3 -mvenv venv
source venv/bin/activate
Or if you are using `pipenv`:
pipenv shell
Now install the dependencies and test dependencies:
pip install -e '.[test]'
To run the tests:
pytest
","
datasette-remote-metadata
Periodically refresh Datasette metadata from a remote URL
Installation
Install this plugin in the same environment as Datasette.
The plugin will fetch the specified metadata from that URL at startup and combine it with any existing metadata. You can use a URL to either a JSON file or a YAML file.
It will periodically refresh that metadata - by default every 30 seconds, unless you specify an alternative ""ttl"" value in the plugin configuration.
Configuration
Available configuration options are as follows:
""url"" - the URL to retrieve remote metadata from. Can link to a JSON or a YAML file.
""ttl"" - integer value in secords: how frequently should the script check for fresh metadata. Defaults to 30 seconds.
""headers"" - a dictionary of additional request headers to send.
""cachebust"" - if true, a random ?0.29508 value will be added to the query string of the remote metadata to bust any intermediary caches.
This example metadata.json configuration refreshes every 10 seconds, uses cache busting and sends an Authorization: Bearer xyz header with the request:
To set up this plugin locally, first checkout the code. Then create a new virtual environment:
cd datasette-remote-metadata
python3 -mvenv venv
source venv/bin/activate
Or if you are using pipenv:
pipenv shell
Now install the dependencies and test dependencies:
pip install -e '.[test]'
To run the tests:
pytest
",1,public,0,,,
393999598,MDEwOlJlcG9zaXRvcnkzOTM5OTk1OTg=,datasette-pyinstrument,simonw/datasette-pyinstrument,0,9599,https://github.com/simonw/datasette-pyinstrument,Use pyinstrument to analyze Datasette page performance,0,2021-08-08T15:33:29Z,2021-08-08T15:50:54Z,2021-08-08T15:50:52Z,,0,0,0,Python,1,1,1,1,0,0,0,0,0,,[],0,0,0,main,"{""admin"": false, ""push"": false, ""pull"": false}",,,0,1,"# datasette-pyinstrument
[](https://pypi.org/project/datasette-pyinstrument/)
[](https://github.com/simonw/datasette-pyinstrument/releases)
[](https://github.com/simonw/datasette-pyinstrument/actions?query=workflow%3ATest)
[](https://github.com/simonw/datasette-pyinstrument/blob/main/LICENSE)
Use pyinstrument to analyze Datasette page performance
## Installation
Install this plugin in the same environment as Datasette.
$ datasette install datasette-pyinstrument
## Usage
Once installed, adding `?_pyinstrument=1` to any URL within Datasette will replace the output of that page with the pyinstrument profiler results for it.
## Demo
You can see the output of this plugin at https://latest-with-plugins.datasette.io/fixtures/sortable?_pyinstrument=1
## Development
To set up this plugin locally, first checkout the code. Then create a new virtual environment:
cd datasette-pyinstrument
python3 -mvenv venv
source venv/bin/activate
Or if you are using `pipenv`:
pipenv shell
Now install the dependencies and test dependencies:
pip install -e '.[test]'
To run the tests:
pytest
","
datasette-pyinstrument
Use pyinstrument to analyze Datasette page performance
Installation
Install this plugin in the same environment as Datasette.
$ datasette install datasette-pyinstrument
Usage
Once installed, adding ?_pyinstrument=1 to any URL within Datasette will replace the output of that page with the pyinstrument profiler results for it.
To set up this plugin locally, first checkout the code. Then create a new virtual environment:
cd datasette-pyinstrument
python3 -mvenv venv
source venv/bin/activate
Or if you are using pipenv:
pipenv shell
Now install the dependencies and test dependencies:
pip install -e '.[test]'
To run the tests:
pytest
",,,,,,
394107614,MDEwOlJlcG9zaXRvcnkzOTQxMDc2MTQ=,datasette-query-links,simonw/datasette-query-links,0,9599,https://github.com/simonw/datasette-query-links,Turn SELECT queries returned by a query into links to execute them,0,2021-08-09T01:16:59Z,2021-08-09T04:31:40Z,2021-08-09T02:56:40Z,,7,3,3,Python,1,1,1,1,0,0,0,0,0,,[],0,0,3,main,"{""admin"": false, ""push"": false, ""pull"": false}",,,0,1,"# datasette-query-links
[](https://pypi.org/project/datasette-query-links/)
[](https://github.com/simonw/datasette-query-links/releases)
[](https://github.com/simonw/datasette-query-links/actions?query=workflow%3ATest)
[](https://github.com/simonw/datasette-query-links/blob/main/LICENSE)
Turn SELECT queries returned by a query into links to execute them
## Installation
Install this plugin in the same environment as Datasette.
$ datasette install datasette-query-links
## Usage
This is an experimental plugin, requiring Datasette 0.59a1 or higher.
Any SQL query that returns a value that itself looks like a valid SQL query will be converted into a link to execute that SQL query when it is displayed in the Datasette interface.
These links will only show for valid SQL queries - if a SQL query would return an error it will not be turned into a link.
## Demo
* [Here's an example query](https://latest-with-plugins.datasette.io/fixtures?sql=select%0D%0A++%27select+*+from+%5Bfacetable%5D%27+as+query%0D%0Aunion%0D%0Aselect%0D%0A++%27select+sqlite_version()%27%0D%0Aunion%0D%0Aselect%0D%0A++%27select+this+is+invalid+SQL+so+will+not+be+linked%27) showing the plugin in action.
## Development
To set up this plugin locally, first checkout the code. Then create a new virtual environment:
cd datasette-query-links
python3 -mvenv venv
source venv/bin/activate
Or if you are using `pipenv`:
pipenv shell
Now install the dependencies and test dependencies:
pip install -e '.[test]'
To run the tests:
pytest
","
datasette-query-links
Turn SELECT queries returned by a query into links to execute them
Installation
Install this plugin in the same environment as Datasette.
$ datasette install datasette-query-links
Usage
This is an experimental plugin, requiring Datasette 0.59a1 or higher.
Any SQL query that returns a value that itself looks like a valid SQL query will be converted into a link to execute that SQL query when it is displayed in the Datasette interface.
These links will only show for valid SQL queries - if a SQL query would return an error it will not be turned into a link.
To set up this plugin locally, first checkout the code. Then create a new virtual environment:
cd datasette-query-links
python3 -mvenv venv
source venv/bin/activate
Or if you are using pipenv:
pipenv shell
Now install the dependencies and test dependencies:
pip install -e '.[test]'
To run the tests:
pytest
",,,,,,
395137513,MDEwOlJlcG9zaXRvcnkzOTUxMzc1MTM=,datasette-x-forwarded-host,simonw/datasette-x-forwarded-host,0,9599,https://github.com/simonw/datasette-x-forwarded-host,Treat the X-Forwarded-Host header as the Host header,0,2021-08-11T23:10:44Z,2021-11-12T20:48:43Z,2021-11-12T20:48:41Z,,4,0,0,Python,1,1,1,1,0,0,0,0,0,,"[""datasette-io"", ""datasette-plugin""]",0,0,0,main,"{""admin"": false, ""maintain"": false, ""push"": false, ""triage"": false, ""pull"": false}",,,0,2,"# datasette-x-forwarded-host
[](https://pypi.org/project/datasette-x-forwarded-host/)
[](https://github.com/simonw/datasette-x-forwarded-host/releases)
[](https://github.com/simonw/datasette-x-forwarded-host/actions?query=workflow%3ATest)
[](https://github.com/simonw/datasette-x-forwarded-host/blob/main/LICENSE)
Treat the X-Forwarded-Host header as the Host header
## Installation
Install this plugin in the same environment as Datasette.
$ datasette install datasette-x-forwarded-host
## Usage
Once installed, Datasette will replace the `host` header with the content of the incoming `x-forwarded-host` header.
This helps Datasette generate links to new pages that work when hosted behind a proxy that rewrites the `host` header.
Only use this plugin in deployment environments where you know the `x-forwarded-host` header can be trusted!
This has been tested on [GitHub Codespaces](https://github.com/features/codespaces) and [GitPod](https://gitpod.io/).
## Development
To set up this plugin locally, first checkout the code. Then create a new virtual environment:
cd datasette-x-forwarded-host
python3 -mvenv venv
source venv/bin/activate
Or if you are using `pipenv`:
pipenv shell
Now install the dependencies and test dependencies:
pip install -e '.[test]'
To run the tests:
pytest
","
datasette-x-forwarded-host
Treat the X-Forwarded-Host header as the Host header
Installation
Install this plugin in the same environment as Datasette.
$ datasette install datasette-x-forwarded-host
Usage
Once installed, Datasette will replace the host header with the content of the incoming x-forwarded-host header.
This helps Datasette generate links to new pages that work when hosted behind a proxy that rewrites the host header.
Only use this plugin in deployment environments where you know the x-forwarded-host header can be trusted!
To set up this plugin locally, first checkout the code. Then create a new virtual environment:
cd datasette-x-forwarded-host
python3 -mvenv venv
source venv/bin/activate
Or if you are using pipenv:
pipenv shell
Now install the dependencies and test dependencies:
pip install -e '.[test]'
To run the tests:
pytest
",1,public,0,,,
399308604,MDEwOlJlcG9zaXRvcnkzOTkzMDg2MDQ=,datasette-app,simonw/datasette-app,0,9599,https://github.com/simonw/datasette-app,The Datasette macOS application,0,2021-08-24T02:21:37Z,2022-11-15T18:57:26Z,2022-09-09T04:55:47Z,https://datasette.io/desktop,897,92,92,JavaScript,1,1,1,1,0,6,0,0,32,,"[""datasette""]",6,32,92,main,"{""admin"": false, ""maintain"": false, ""push"": false, ""triage"": false, ""pull"": false}",,,6,8,,,1,public,0,,0,1
400678317,MDEwOlJlcG9zaXRvcnk0MDA2NzgzMTc=,datasette-verify,simonw/datasette-verify,0,9599,https://github.com/simonw/datasette-verify,Verify that files can be opened by Datasette,0,2021-08-28T01:59:12Z,2021-08-28T02:37:03Z,2021-08-28T02:31:34Z,https://datasette.io/tools/datasette-verify,0,1,1,Python,1,1,1,1,0,0,0,0,0,,"[""datasette"", ""datasette-io"", ""datasette-plugin""]",0,0,1,main,"{""admin"": false, ""maintain"": false, ""push"": false, ""triage"": false, ""pull"": false}",,,0,1,"# datasette-verify
[](https://pypi.org/project/datasette-verify/)
[](https://github.com/simonw/datasette-verify/releases)
[](https://github.com/simonw/datasette-verify/actions?query=workflow%3ATest)
[](https://github.com/simonw/datasette-verify/blob/main/LICENSE)
Verify that SQLite files can be opened using Datasette
## Installation
Install this plugin in the same environment as Datasette.
$ datasette install datasette-verify
This plugin depends on [Datasette 0.59a2](https://github.com/simonw/datasette/releases/tag/0.59a2) or higher, as it uses the [register_commands()](https://docs.datasette.io/en/latest/plugin_hooks.html#plugin-hook-register-commands) plugin hook.
## Usage
To confirm that files can be opened by Datasette, run the following:
datasette verify file1.db file2.db
You can pass one or more file paths.
The command will exit silently with a 0 exit code if the files are all valid SQLite databases that Datasette can open.
It will exit with a 1 exit code and display an error for the first file it finds that is not valid.
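For example, a wrapper script could shell out to the command and branch on that exit code - a minimal sketch, with placeholder file names:
```python
import subprocess

# Placeholder file names - pass whichever database files you want to check
result = subprocess.run(['datasette', 'verify', 'file1.db', 'file2.db'])

if result.returncode == 0:
    print('All files are valid SQLite databases that Datasette can open')
else:
    print('At least one file failed verification')
```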
## Development
To set up this plugin locally, first checkout the code. Then create a new virtual environment:
cd datasette-verify
python3 -mvenv venv
source venv/bin/activate
Or if you are using `pipenv`:
pipenv shell
Now install the dependencies and test dependencies:
pip install -e '.[test]'
To run the tests:
pytest
","
datasette-verify
Verify that SQLite files can be opened using Datasette
Installation
Install this plugin in the same environment as Datasette.
To confirm that files can be opened by Datasette, run the following:
datasette verify file1.db file2.db
You can pass one or more file paths.
The command will exit silently with a 0 exit code if the files are all valid SQLite databases that Datasette can open.
It will exit with a 1 exit code and display an error for the first file it finds that is not valid.
Development
To set up this plugin locally, first checkout the code. Then create a new virtual environment:
cd datasette-verify
python3 -mvenv venv
source venv/bin/activate
Or if you are using pipenv:
pipenv shell
Now install the dependencies and test dependencies:
pip install -e '.[test]'
To run the tests:
pytest
",,,,,,
409678203,R_kgDOGGsxew,datasette-template-request,simonw/datasette-template-request,0,9599,https://github.com/simonw/datasette-template-request,Expose the Datasette request object to custom templates,0,2021-09-23T17:07:00Z,2021-09-23T17:29:08Z,2021-09-23T17:29:36Z,https://datasette.io/plugins/datasette-template-request,0,0,0,Python,1,1,1,1,0,0,0,0,0,,"[""datasette"", ""datasette-io"", ""datasette-plugin""]",0,0,0,main,"{""admin"": false, ""maintain"": false, ""push"": false, ""triage"": false, ""pull"": false}",,,0,1,"# datasette-template-request
[](https://pypi.org/project/datasette-template-request/)
[](https://github.com/simonw/datasette-template-request/releases)
[](https://github.com/simonw/datasette-template-request/actions?query=workflow%3ATest)
[](https://github.com/simonw/datasette-template-request/blob/main/LICENSE)
Expose the Datasette request object to custom templates
## Installation
Install this plugin in the same environment as Datasette.
$ datasette install datasette-template-request
## Usage
Once this plugin is installed, Datasette [custom templates](https://docs.datasette.io/en/stable/custom_templates.html) can use `{{ request }}` to access the current [request object](https://docs.datasette.io/en/stable/internals.html#request-object). For example, to access `?name=Cleo` in the query string a template could use this:
Name: {{ request.args.name }}
## Development
To set up this plugin locally, first checkout the code. Then create a new virtual environment:
cd datasette-template-request
python3 -mvenv venv
source venv/bin/activate
Or if you are using `pipenv`:
pipenv shell
Now install the dependencies and test dependencies:
pip install -e '.[test]'
To run the tests:
pytest
","
datasette-template-request
Expose the Datasette request object to custom templates
Installation
Install this plugin in the same environment as Datasette.
$ datasette install datasette-template-request
Usage
Once this plugin is installed, Datasette custom templates can use {{ request }} to access the current request object. For example, to access ?name=Cleo in the query string a template could use this:
Name: {{ request.args.name }}
Development
To set up this plugin locally, first checkout the code. Then create a new virtual environment:
cd datasette-template-request
python3 -mvenv venv
source venv/bin/activate
Or if you are using pipenv:
pipenv shell
Now install the dependencies and test dependencies:
pip install -e '.[test]'
To run the tests:
pytest
",1,,,,,
423589294,R_kgDOGT91rg,datasette-jupyterlite,simonw/datasette-jupyterlite,0,9599,https://github.com/simonw/datasette-jupyterlite,JupyterLite as a Datasette plugin,0,2021-11-01T19:22:51Z,2021-11-05T05:12:17Z,2021-11-05T05:12:33Z,,5,3,3,Python,1,1,1,1,0,0,0,0,1,,[],0,1,3,main,"{""admin"": false, ""maintain"": false, ""push"": false, ""triage"": false, ""pull"": false}",,,0,1,"# datasette-jupyterlite
[](https://pypi.org/project/datasette-jupyterlite/)
[](https://github.com/simonw/datasette-jupyterlite/releases)
[](https://github.com/simonw/datasette-jupyterlite/actions?query=workflow%3ATest)
[](https://github.com/simonw/datasette-jupyterlite/blob/main/LICENSE)
[JupyterLite](https://jupyterlite.readthedocs.io/en/latest/) as a Datasette plugin
## Installation
Install this plugin in the same environment as Datasette.
$ datasette install datasette-jupyterlite
## Demo
You can try out a demo of the plugin here: https://latest-with-plugins.datasette.io/jupyterlite/
Run this example code in a Pyolite notebook to pull all of the data from the [github/stars](https://latest-with-plugins.datasette.io/github/stars) table into a Pandas DataFrame:
```python
import pandas, pyodide
df = pandas.read_csv(pyodide.open_url(
""https://latest-with-plugins.datasette.io/github/stars.csv?_labels=on&_stream=on&_size=max"")
)
```
## Usage
Once installed, visit `/jupyterlite/` to access JupyterLite served from your Datasette instance.
## Development
To set up this plugin locally, first checkout the code. Then create a new virtual environment:
cd datasette-jupyterlite
python3 -mvenv venv
source venv/bin/activate
Or if you are using `pipenv`:
pipenv shell
Now install the dependencies and test dependencies:
pip install -e '.[test]'
To run the tests:
pytest
","
Once installed, visit /jupyterlite/ to access JupyterLite served from your Datasette instance.
Development
To set up this plugin locally, first checkout the code. Then create a new virtual environment:
cd datasette-jupyterlite
python3 -mvenv venv
source venv/bin/activate
Or if you are using pipenv:
pipenv shell
Now install the dependencies and test dependencies:
pip install -e '.[test]'
To run the tests:
pytest
",1,public,0,,,
423984522,R_kgDOGUV9ig,s3-credentials,simonw/s3-credentials,0,9599,https://github.com/simonw/s3-credentials,A tool for creating credentials for accessing S3 buckets,0,2021-11-02T20:09:50Z,2022-09-05T15:12:46Z,2022-09-15T23:43:10Z,https://s3-credentials.readthedocs.io,204,129,129,Python,1,1,1,1,0,10,0,0,18,apache-2.0,"[""aws"", ""boto3"", ""s3""]",10,18,129,main,"{""admin"": false, ""maintain"": false, ""push"": false, ""triage"": false, ""pull"": false}",,,10,2,"# s3-credentials
[](https://pypi.org/project/s3-credentials/)
[](https://github.com/simonw/s3-credentials/releases)
[](https://github.com/simonw/s3-credentials/actions?query=workflow%3ATest)
[](https://s3-credentials.readthedocs.org/)
[](https://github.com/simonw/s3-credentials/blob/master/LICENSE)
A tool for creating credentials for accessing S3 buckets
For project background, see [s3-credentials: a tool for creating credentials for S3 buckets](https://simonwillison.net/2021/Nov/3/s3-credentials/) on my blog.
## Installation
pip install s3-credentials
## Basic usage
To create a new S3 bucket and output credentials that can be used with only that bucket:
```
% s3-credentials create my-new-s3-bucket --create-bucket
Created bucket: my-new-s3-bucket
Created user: s3.read-write.my-new-s3-bucket with permissions boundary: arn:aws:iam::aws:policy/AmazonS3FullAccess
Attached policy s3.read-write.my-new-s3-bucket to user s3.read-write.my-new-s3-bucket
Created access key for user: s3.read-write.my-new-s3-bucket
{
""UserName"": ""s3.read-write.my-new-s3-bucket"",
""AccessKeyId"": ""AKIAWXFXAIOZOYLZAEW5"",
""Status"": ""Active"",
""SecretAccessKey"": ""..."",
""CreateDate"": ""2021-11-03 01:38:24+00:00""
}
```
The tool can do a lot more than this. See the [documentation](https://s3-credentials.readthedocs.io/) for details.
## Documentation
- [Full documentation](https://s3-credentials.readthedocs.io/)
- [Command help reference](https://s3-credentials.readthedocs.io/en/stable/help.html)
- [Release notes](https://github.com/simonw/s3-credentials/releases)
","
s3-credentials
A tool for creating credentials for accessing S3 buckets
",1,public,0,,0,
427128866,R_kgDOGXV4Ig,git-history,simonw/git-history,0,9599,https://github.com/simonw/git-history,Tools for analyzing Git history using SQLite,0,2021-11-11T20:07:06Z,2022-10-20T20:33:28Z,2022-10-21T23:06:33Z,,117,114,114,Python,1,1,1,1,0,11,0,0,20,apache-2.0,"[""git"", ""sqlite""]",11,20,114,main,"{""admin"": false, ""maintain"": false, ""push"": false, ""triage"": false, ""pull"": false}",,,11,3,"# git-history
[](https://pypi.org/project/git-history/)
[](https://github.com/simonw/git-history/releases)
[](https://github.com/simonw/git-history/actions?query=workflow%3ATest)
[](https://github.com/simonw/git-history/blob/master/LICENSE)
Tools for analyzing Git history using SQLite
For background on this project see [git-history: a tool for analyzing scraped data collected using Git and SQLite](https://simonwillison.net/2021/Dec/7/git-history/).
[Measuring traffic during the Half Moon Bay Pumpkin Festival](https://simonwillison.net/2022/Oct/19/measuring-traffic/) describes a project using this tool in detail.
## Installation
Install this tool using `pip`:
$ pip install git-history
## Demos
[git-history-demos.datasette.io](http://git-history-demos.datasette.io/) hosts three example databases created using this tool:
- [pge-outages](https://git-history-demos.datasette.io/pge-outages) shows a history of PG&E (the electricity supplier) [outages](https://pgealerts.alerts.pge.com/outagecenter/), using data collected in [simonw/pge-outages](https://github.com/simonw/pge-outages) converted using [pge-outages.sh](https://github.com/simonw/git-history/blob/main/demos/pge-outages.sh)
- [ca-fires](https://git-history-demos.datasette.io/ca-fires) shows a history of fires in California reported on [fire.ca.gov/incidents](https://www.fire.ca.gov/incidents/), from data in [simonw/ca-fires-history](https://github.com/simonw/ca-fires-history) converted using [ca-fires.sh](https://github.com/simonw/git-history/blob/main/demos/ca-fires.sh)
- [sf-bay-511](https://git-history-demos.datasette.io/sf-bay-511) has records of San Francisco Bay Area traffic and transit incident data from [511.org](https://511.org/), collected in [dbreunig/511-events-history](https://github.com/dbreunig/511-events-history) converted using [sf-bay-511.sh](https://github.com/simonw/git-history/blob/main/demos/sf-bay-511.sh)
The demos are deployed using [Datasette](https://datasette.io/) on [Google Cloud Run](https://cloud.google.com/run/) by [this GitHub Actions workflow](https://github.com/simonw/git-history/blob/main/.github/workflows/deploy-demos.yml).
## Usage
This tool can be run against a Git repository that holds a file that contains JSON, CSV/TSV or some other format and which has multiple versions tracked in the Git history. Read [Git scraping: track changes over time by scraping to a Git repository](https://simonwillison.net/2020/Oct/9/git-scraping/) to understand how you might create such a repository.
The `file` command analyzes the history of an individual file within the repository, and generates a SQLite database table that represents the different versions of that file over time.
The file is assumed to contain multiple objects - for example, the results of scraping an electricity outage map or a CSV file full of records.
Assume you have a file called `incidents.json` that is a JSON array of objects, with multiple versions of that file recorded in a repository. Each version of that file might look something like this:
```json
[
{
""IncidentID"": ""abc123"",
""Location"": ""Corner of 4th and Vermont"",
""Type"": ""fire""
},
{
""IncidentID"": ""cde448"",
""Location"": ""555 West Example Drive"",
""Type"": ""medical""
}
]
```
Change directory into the GitHub repository in question and run the following:
git-history file incidents.db incidents.json
This will create a new SQLite database in the `incidents.db` file with three tables:
- `commits` containing a row for every commit, with a `hash` column, the `commit_at` date and a foreign key to a `namespace`.
- `item` containing a row for every item in every version of the `filename.json` file - with an extra `_commit` column that is a foreign key back to the `commit` table.
- `namespaces` containing a single row. This allows you to build multiple tables for different files, using the `--namespace` option described below.
The database schema for this example will look like this:
```sql
CREATE TABLE [namespaces] (
[id] INTEGER PRIMARY KEY,
[name] TEXT
);
CREATE UNIQUE INDEX [idx_namespaces_name]
ON [namespaces] ([name]);
CREATE TABLE [commits] (
[id] INTEGER PRIMARY KEY,
[namespace] INTEGER REFERENCES [namespaces]([id]),
[hash] TEXT,
[commit_at] TEXT
);
CREATE UNIQUE INDEX [idx_commits_namespace_hash]
ON [commits] ([namespace], [hash]);
CREATE TABLE [item] (
[IncidentID] TEXT,
[Location] TEXT,
[Type] TEXT
);
```
If you have 10 historic versions of the `incidents.json` file and each one contains 30 incidents, you will end up with 10 * 30 = 300 rows in your `item` table.
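As a quick illustration of querying that output, here is a small sketch that opens the resulting `incidents.db` with Python's `sqlite3` module and counts rows per incident type, using the column names from the example schema above:
```python
import sqlite3

# Open the database created by: git-history file incidents.db incidents.json
conn = sqlite3.connect('incidents.db')

# Count captured rows per incident type, across every stored version of the file
for incident_type, count in conn.execute(
    'select Type, count(*) from item group by Type order by count(*) desc'
):
    print(incident_type, count)

conn.close()
```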
### Track the history of individual items using IDs
If your objects have a unique identifier - or multiple columns that together form a unique identifier - you can use the `--id` option to de-duplicate and track changes to each of those items over time.
This provides a much more interesting way to apply this tool.
If there is a unique identifier column called `IncidentID` you could run the following:
git-history file incidents.db incidents.json --id IncidentID
The database schema used here is very different from the one used without the `--id` option.
If you have already imported history, the command will skip any commits that it has seen already and just process new ones. This means that even though an initial import could be slow subsequent imports should run a lot faster.
This command will create six tables - `commits`, `item`, `item_version`, `columns`, `item_changed` and `namespaces`.
Here's the full schema:
```sql
CREATE TABLE [namespaces] (
[id] INTEGER PRIMARY KEY,
[name] TEXT
);
CREATE UNIQUE INDEX [idx_namespaces_name]
ON [namespaces] ([name]);
CREATE TABLE [commits] (
[id] INTEGER PRIMARY KEY,
[namespace] INTEGER REFERENCES [namespaces]([id]),
[hash] TEXT,
[commit_at] TEXT
);
CREATE UNIQUE INDEX [idx_commits_namespace_hash]
ON [commits] ([namespace], [hash]);
CREATE TABLE [item] (
[_id] INTEGER PRIMARY KEY,
[_item_id] TEXT
, [IncidentID] TEXT, [Location] TEXT, [Type] TEXT, [_commit] INTEGER);
CREATE UNIQUE INDEX [idx_item__item_id]
ON [item] ([_item_id]);
CREATE TABLE [item_version] (
[_id] INTEGER PRIMARY KEY,
[_item] INTEGER REFERENCES [item]([_id]),
[_version] INTEGER,
[_commit] INTEGER REFERENCES [commits]([id]),
[IncidentID] TEXT,
[Location] TEXT,
[Type] TEXT,
[_item_full_hash] TEXT
);
CREATE TABLE [columns] (
[id] INTEGER PRIMARY KEY,
[namespace] INTEGER REFERENCES [namespaces]([id]),
[name] TEXT
);
CREATE UNIQUE INDEX [idx_columns_namespace_name]
ON [columns] ([namespace], [name]);
CREATE TABLE [item_changed] (
[item_version] INTEGER REFERENCES [item_version]([_id]),
[column] INTEGER REFERENCES [columns]([id]),
PRIMARY KEY ([item_version], [column])
);
CREATE VIEW item_version_detail AS select
commits.commit_at as _commit_at,
commits.hash as _commit_hash,
item_version.*,
(
select json_group_array(name) from columns
where id in (
select column from item_changed
where item_version = item_version._id
)
) as _changed_columns
from item_version
join commits on commits.id = item_version._commit;
CREATE INDEX [idx_item_version__item]
ON [item_version] ([_item]);
```
#### item table
The `item` table will contain the most recent version of each row, de-duplicated by ID, plus the following additional columns:
- `_id` - a numeric integer primary key, used as a foreign key from the `item_version` table.
- `_item_id` - a hash of the values of the columns specified using the `--id` option to the command. This is used for de-duplication when processing new versions.
- `_commit` - a foreign key to the `commit` table, representing the most recent commit to modify this item.
#### item_version table
The `item_version` table will contain a row for each captured differing version of that item, plus the following columns:
- `_id` - a numeric ID for the item version record.
- `_item` - a foreign key to the `item` table.
- `_version` - the numeric version number, starting at 1 and incrementing for each captured version.
- `_commit` - a foreign key to the `commit` table.
- `_item_full_hash` - a hash of this version of the item. This is used internally by the tool to identify items that have changed between commits.
The other columns in this table represent columns in the original data that have changed since the previous version. If the value has not changed, it will be represented by a `null`.
If a value was previously set but has been changed back to `null` it will still be represented as `null` in the `item_version` row. You can identify these using the `item_changed` many-to-many table described below.
You can use the `--full-versions` option to store full copies of the item at each version, rather than just storing the columns that have changed.
#### item_version_detail view
This SQL view joins `item_version` against `commits` to add three further columns: `_commit_at` with the date of the commit, `_commit_hash` with the Git commit hash, and `_changed_columns` with a JSON array of the names of the columns that changed in that version.
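As a sketch of how these tables fit together, the following example uses Python's `sqlite3` module to walk through the captured history of a single incident via the `item_version_detail` view. It assumes the example `incidents.db` built with `--id IncidentID` above, and uses an incident ID from the sample data:
```python
import sqlite3

# Assumes the database was built with:
#   git-history file incidents.db incidents.json --id IncidentID
conn = sqlite3.connect('incidents.db')
conn.row_factory = sqlite3.Row

# Look up the internal _id for one incident in the de-duplicated item table
item_row = conn.execute(
    'select _id from item where IncidentID = ?', ('abc123',)  # ID from the sample data
).fetchone()

# Walk through every captured version of that item, oldest first, showing
# when it changed and which columns changed at each step
for row in conn.execute(
    '''
    select _version, _commit_at, _changed_columns
    from item_version_detail
    where _item = ?
    order by _version
    ''',
    (item_row['_id'],),
):
    print(dict(row))

conn.close()
```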
#### item_changed
This many-to-many table indicates exactly which columns were changed in an `item_version`.
- `item_version` is a foreign key to a row in the `item_version` table.
- `column` is a foreign key to a row in the `columns` table.
This table will have the largest number of rows, which is why it stores just two integers in order to save space.
#### columns
The `columns` table stores column names. It is referenced by `item_changed`.
- `id` - an integer ID.
- `name` - the name of the column.
- `namespace` - a foreign key to `namespaces`, used when multiple file histories are sharing the same database.
#### Reserved column names
Note that `_id`, `_item_full_hash`, `_item`, `_item_id`, `_version`, `_commit`, `_commit_at`, `_commit_hash`, `_changed_columns`, `rowid` are considered reserved column names for the purposes of this tool.
If your data contains any of these they will be renamed to add a trailing underscore, for example `_id_`, `_item_`, `_version_`, to avoid clashing with the reserved columns.
If you have a column with a name such as `_commit_` it will be renamed too, adding an additional trailing underscore, so `_commit_` becomes `_commit__` and `_commit__` becomes `_commit___`.
### Additional options
- `--repo DIRECTORY` - the path to the Git repository, if it is not the current working directory.
- `--branch TEXT` - the Git branch to analyze - defaults to `main`.
- `--id TEXT` - as described above: pass one or more columns that uniquely identify a record, so that changes to that record can be calculated over time.
- `--full-versions` - instead of recording just the columns that have changed in the `item_version` table, record a full copy of each version of the item.
- `--ignore TEXT` - one or more columns to ignore - they will not be included in the resulting database.
- `--csv` - treat the data as CSV or TSV rather than JSON, and attempt to guess the correct dialect.
- `--dialect` - use a specific CSV dialect. Options are `excel`, `excel-tab` and `unix` - see [the Python CSV documentation](https://docs.python.org/3/library/csv.html#csv.excel) for details.
- `--skip TEXT` - one or more full Git commit hashes that should be skipped. You can use this if some of the data in your revision history is corrupted in a way that prevents this tool from working.
- `--start-at TEXT` - skip commits prior to the specified commit hash.
- `--start-after TEXT` - skip commits up to and including the specified commit hash, then start processing from the following commit.
- `--convert TEXT` - custom Python code for a conversion, described below.
- `--import TEXT` - additional Python modules to import for `--convert`.
- `--ignore-duplicate-ids` - if a single version of a file has the same ID in it more than once, the tool will exit with an error. Use this option to ignore this and instead pick just the first of the two duplicates.
- `--namespace TEXT` - use this if you wish to include the history of multiple different files in the same database. The default is `item` but you can set it to something else, which will produce tables with names like `yournamespace` and `yournamespace_version`.
- `--wal` - Enable WAL mode on the created database file. Use this if you plan to run queries against the database while `git-history` is creating it.
- `--silent` - don't show the progress bar.
### CSV and TSV data
If the data in your repository is a CSV or TSV file you can process it by adding the `--csv` option. This will attempt to detect which delimiter is used by the file, so the same option works for both comma- and tab-separated values.
git-history file trees.db trees.csv --id TreeID
You can also specify the CSV dialect using the `--dialect` option.
### Custom conversions using --convert
If your data is not already either CSV/TSV or a flat JSON array, you can reshape it using the `--convert` option.
The format needed by this tool is an array of dictionaries, as demonstrated by the `incidents.json` example above.
If your data does not fit this shape, you can provide a snippet of Python code that converts the on-disk content of each stored file into a Python list of dictionaries.
For example, if your stored files each look like this:
```json
{
""incidents"": [
{
""id"": ""552"",
""name"": ""Hawthorne Fire"",
""engines"": 3
},
{
""id"": ""556"",
""name"": ""Merlin Fire"",
""engines"": 1
}
]
}
```
You could use the following Python snippet to convert them to the required format:
```python
json.loads(content)[""incidents""]
```
(The `json` module is exposed to your custom function by default.)
You would then run the tool like this:
git-history file database.db incidents.json \
--id id \
--convert 'json.loads(content)[""incidents""]'
The `content` variable is always a `bytes` object representing the content of the file at a specific moment in the repository's history.
You can import additional modules using `--import`. This example shows how you could read a CSV file that uses `;` as the delimiter:
git-history file trees.db ../sf-tree-history/Street_Tree_List.csv \
--repo ../sf-tree-history \
--import csv \
--import io \
--convert '
fp = io.StringIO(content.decode(""utf-8""))
return list(csv.DictReader(fp, delimiter="";""))
' \
--id TreeID
You can import nested modules such as [ElementTree](https://docs.python.org/3/library/xml.etree.elementtree.html) using `--import xml.etree.ElementTree`, then refer to them in your function body as `xml.etree.ElementTree`. For example, if your tracked data was in an `items.xml` file that looked like this:
```xml
<items>
  <item id=""1"" name=""Gin"" />
  <item id=""2"" name=""Tonic"" />
</items>
```
You could load it using the following `--convert` script:
```
git-history file items.db items.xml --convert '
tree = xml.etree.ElementTree.fromstring(content)
return [el.attrib for el in tree.iter(""item"")]
' --import xml.etree.ElementTree --id id
```
If your Python code spans more than one line it needs to include a `return` statement.
You can also use Python generators in your `--convert` code, for example:
git-history file stats.db package-stats/stats.json \
--repo package-stats \
--convert '
data = json.loads(content)
for key, counts in data.items():
for date, count in counts.items():
yield {
""package"": key,
""date"": date,
""count"": count
}
' --id package --id date
This conversion function expects data that looks like this:
```json
{
""airtable-export"": {
""2021-05-18"": 66,
""2021-05-19"": 60,
""2021-05-20"": 87
}
}
```
## Development
To contribute to this tool, first checkout the code. Then create a new virtual environment:
cd git-history
python -m venv venv
source venv/bin/activate
Or if you are using `pipenv`:
pipenv shell
Now install the dependencies and test dependencies:
pip install -e '.[test]'
To run the tests:
pytest
To update the schema examples in this README file:
cog -r README.md
","
This tool can be run against a Git repository that holds a file that contains JSON, CSV/TSV or some other format and which has multiple versions tracked in the Git history. Read Git scraping: track changes over time by scraping to a Git repository to understand how you might create such a repository.
The file command analyzes the history of an individual file within the repository, and generates a SQLite database table that represents the different versions of that file over time.
The file is assumed to contain multiple objects - for example, the results of scraping an electricity outage map or a CSV file full of records.
Assume you have a file called incidents.json that is a JSON array of objects, with multiple versions of that file recorded in a repository. Each version of that file might look something like this:
[
{
""IncidentID"": ""abc123"",
""Location"": ""Corner of 4th and Vermont"",
""Type"": ""fire""
},
{
""IncidentID"": ""cde448"",
""Location"": ""555 West Example Drive"",
""Type"": ""medical""
}
]
Change directory into the GitHub repository in question and run the following:
git-history file incidents.db incidents.json
This will create a new SQLite database in the incidents.db file with three tables:
commits containing a row for every commit, with a hash column, the commit_at date and a foreign key to a namespace.
item containing a row for every item in every version of the filename.json file - with an extra _commit column that is a foreign key back to the commit table.
namespaces containing a single row. This allows you to build multiple tables for different files, using the --namespace option described below.
The database schema for this example will look like this:
CREATE TABLE [namespaces] (
   [id] INTEGER PRIMARY KEY,
   [name] TEXT
);
CREATE UNIQUE INDEX [idx_namespaces_name]
    ON [namespaces] ([name]);
CREATE TABLE [commits] (
   [id] INTEGER PRIMARY KEY,
   [namespace] INTEGER REFERENCES [namespaces]([id]),
   [hash] TEXT,
   [commit_at] TEXT
);
CREATE UNIQUE INDEX [idx_commits_namespace_hash]
    ON [commits] ([namespace], [hash]);
CREATE TABLE [item] (
   [IncidentID] TEXT,
   [Location] TEXT,
   [Type] TEXT
);
If you have 10 historic versions of the incidents.json file and each one contains 30 incidents, you will end up with 10 * 30 = 300 rows in your item table.
Track the history of individual items using IDs
If your objects have a unique identifier - or multiple columns that together form a unique identifier - you can use the --id option to de-duplicate and track changes to each of those items over time.
This provides a much more interesting way to apply this tool.
If there is a unique identifier column called IncidentID you could run the following:
The database schema used here is very different from the one used without the --id option.
If you have already imported history, the command will skip any commits that it has seen already and just process new ones. This means that even though an initial import could be slow subsequent imports should run a lot faster.
This command will create six tables - commits, item, item_version, columns, item_changed and namespaces.
Here's the full schema:
CREATE TABLE [namespaces] (
   [id] INTEGER PRIMARY KEY,
   [name] TEXT
);
CREATE UNIQUE INDEX [idx_namespaces_name]
    ON [namespaces] ([name]);
CREATE TABLE [commits] (
   [id] INTEGER PRIMARY KEY,
   [namespace] INTEGER REFERENCES [namespaces]([id]),
   [hash] TEXT,
   [commit_at] TEXT
);
CREATE UNIQUE INDEX [idx_commits_namespace_hash]
    ON [commits] ([namespace], [hash]);
CREATE TABLE [item] (
   [_id] INTEGER PRIMARY KEY,
   [_item_id] TEXT
, [IncidentID] TEXT, [Location] TEXT, [Type] TEXT, [_commit] INTEGER);
CREATE UNIQUE INDEX [idx_item__item_id]
    ON [item] ([_item_id]);
CREATE TABLE [item_version] (
   [_id] INTEGER PRIMARY KEY,
   [_item] INTEGER REFERENCES [item]([_id]),
   [_version] INTEGER,
   [_commit] INTEGER REFERENCES [commits]([id]),
   [IncidentID] TEXT,
   [Location] TEXT,
   [Type] TEXT,
   [_item_full_hash] TEXT
);
CREATE TABLE [columns] (
   [id] INTEGER PRIMARY KEY,
   [namespace] INTEGER REFERENCES [namespaces]([id]),
   [name] TEXT
);
CREATE UNIQUE INDEX [idx_columns_namespace_name]
    ON [columns] ([namespace], [name]);
CREATE TABLE [item_changed] (
   [item_version] INTEGER REFERENCES [item_version]([_id]),
   [column] INTEGER REFERENCES [columns]([id]),
   PRIMARY KEY ([item_version], [column])
);
CREATE VIEW item_version_detail AS select
  commits.commit_at as _commit_at,
  commits.hash as _commit_hash,
  item_version.*,
  (
    select json_group_array(name) from columns
    where id in (
      select column from item_changed
      where item_version = item_version._id
    )
  ) as _changed_columns
  from item_version
  join commits on commits.id = item_version._commit;
CREATE INDEX [idx_item_version__item]
    ON [item_version] ([_item]);
item table
The item table will contain the most recent version of each row, de-duplicated by ID, plus the following additional columns:
_id - a numeric integer primary key, used as a foreign key from the item_version table.
_item_id - a hash of the values of the columns specified using the --id option to the command. This is used for de-duplication when processing new versions.
_commit - a foreign key to the commit table, representing the most recent commit to modify this item.
item_version table
The item_version table will contain a row for each captured differing version of that item, plus the following columns:
_id - a numeric ID for the item version record.
_item - a foreign key to the item table.
_version - the numeric version number, starting at 1 and incrementing for each captured version.
_commit - a foreign key to the commit table.
_item_full_hash - a hash of this version of the item. This is used internally by the tool to identify items that have changed between commits.
The other columns in this table represent columns in the original data that have changed since the previous version. If the value has not changed, it will be represented by a null.
If a value was previously set but has been changed back to null it will still be represented as null in the item_version row. You can identify these using the item_changed many-to-many table described below.
You can use the --full-versions option to store full copies of the item at each version, rather than just storing the columns that have changed.
item_version_detail view
This SQL view joins item_version against commits to add three further columns: _commit_at with the date of the commit, _commit_hash with the Git commit hash, and _changed_columns with a JSON array of the names of the columns that changed in that version.
item_changed
This many-to-many table indicates exactly which columns were changed in an item_version.
item_version is a foreign key to a row in the item_version table.
column is a foreign key to a row in the columns table.
This table will have the largest number of rows, which is why it stores just two integers in order to save space.
columns
The columns table stores column names. It is referenced by item_changed.
id - an integer ID.
name - the name of the column.
namespace - a foreign key to namespaces, used when multiple file histories are sharing the same database.
Reserved column names
Note that _id, _item_full_hash, _item, _item_id, _version, _commit, _commit_at, _commit_hash, _changed_columns, rowid are considered reserved column names for the purposes of this tool.
If your data contains any of these they will be renamed to add a trailing underscore, for example _id_, _item_, _version_, to avoid clashing with the reserved columns.
If you have a column with a name such as _commit_ it will be renamed too, adding an additional trailing underscore, so _commit_ becomes _commit__ and _commit__ becomes _commit___.
Additional options
--repo DIRECTORY - the path to the Git repository, if it is not the current working directory.
--branch TEXT - the Git branch to analyze - defaults to main.
--id TEXT - as described above: pass one or more columns that uniquely identify a record, so that changes to that record can be calculated over time.
--full-versions - instead of recording just the columns that have changed in the item_version table, record a full copy of each version of the item.
--ignore TEXT - one or more columns to ignore - they will not be included in the resulting database.
--csv - treat the data as CSV or TSV rather than JSON, and attempt to guess the correct dialect.
--dialect - use a specific CSV dialect. Options are excel, excel-tab and unix - see the Python CSV documentation for details.
--skip TEXT - one or more full Git commit hashes that should be skipped. You can use this if some of the data in your revision history is corrupted in a way that prevents this tool from working.
--start-at TEXT - skip commits prior to the specified commit hash.
--start-after TEXT - skip commits up to and including the specified commit hash, then start processing from the following commit.
--convert TEXT - custom Python code for a conversion, described below.
--import TEXT - additional Python modules to import for --convert.
--ignore-duplicate-ids - if a single version of a file has the same ID in it more than once, the tool will exit with an error. Use this option to ignore this and instead pick just the first of the two duplicates.
--namespace TEXT - use this if you wish to include the history of multiple different files in the same database. The default is item but you can set it to something else, which will produce tables with names like yournamespace and yournamespace_version.
--wal - Enable WAL mode on the created database file. Use this if you plan to run queries against the database while git-history is creating it.
--silent - don't show the progress bar.
CSV and TSV data
If the data in your repository is a CSV or TSV file you can process it by adding the --csv option. This will attempt to detect which delimiter is used by the file, so the same option works for both comma- and tab-separated values.
git-history file trees.db trees.csv --id TreeID
You can also specify the CSV dialect using the --dialect option.
Custom conversions using --convert
If your data is not already either CSV/TSV or a flat JSON array, you can reshape it using the --convert option.
The format needed by this tool is an array of dictionaries, as demonstrated by the incidents.json example above.
If your data does not fit this shape, you can provide a snippet of Python code that converts the on-disk content of each stored file into a Python list of dictionaries.
For example, if your stored files each look like this:
You can import nested modules such as ElementTree using --import xml.etree.ElementTree, then refer to them in your function body as xml.etree.ElementTree. For example, if your tracked data was in an items.xml file that looked like this:
You could load it using the following --convert script:
git-history file items.db items.xml --convert '
tree = xml.etree.ElementTree.fromstring(content)
return [el.attrib for el in tree.iter(""item"")]
' --import xml.etree.ElementTree --id id
If your Python code spans more than one line it needs to include a return statement.
You can also use Python generators in your --convert code, for example:
git-history file stats.db package-stats/stats.json \
--repo package-stats \
--convert '
data = json.loads(content)
for key, counts in data.items():
for date, count in counts.items():
yield {
""package"": key,
""date"": date,
""count"": count
}
' --id package --id date
This conversion function expects data that looks like this:
To contribute to this tool, first checkout the code. Then create a new virtual environment:
cd git-history
python -m venv venv
source venv/bin/activate
Or if you are using pipenv:
pipenv shell
Now install the dependencies and test dependencies:
pip install -e '.[test]'
To run the tests:
pytest
To update the schema examples in this README file:
cog -r README.md
",1,public,0,"{""id"": 401177473, ""node_id"": ""MDEwOlJlcG9zaXRvcnk0MDExNzc0NzM="", ""name"": ""click-app-template-repository"", ""full_name"": ""simonw/click-app-template-repository"", ""private"": false, ""owner"": {""login"": ""simonw"", ""id"": 9599, ""node_id"": ""MDQ6VXNlcjk1OTk="", ""avatar_url"": ""https://avatars.githubusercontent.com/u/9599?v=4"", ""gravatar_id"": """", ""url"": ""https://api.github.com/users/simonw"", ""html_url"": ""https://github.com/simonw"", ""followers_url"": ""https://api.github.com/users/simonw/followers"", ""following_url"": ""https://api.github.com/users/simonw/following{/other_user}"", ""gists_url"": ""https://api.github.com/users/simonw/gists{/gist_id}"", ""starred_url"": ""https://api.github.com/users/simonw/starred{/owner}{/repo}"", ""subscriptions_url"": ""https://api.github.com/users/simonw/subscriptions"", ""organizations_url"": ""https://api.github.com/users/simonw/orgs"", ""repos_url"": ""https://api.github.com/users/simonw/repos"", ""events_url"": ""https://api.github.com/users/simonw/events{/privacy}"", ""received_events_url"": ""https://api.github.com/users/simonw/received_events"", ""type"": ""User"", ""site_admin"": false}, ""html_url"": ""https://github.com/simonw/click-app-template-repository"", ""description"": ""GitHub template repository for creating new Python Click CLI tools, using the simonw/click-app cookiecutter template"", ""fork"": false, ""url"": ""https://api.github.com/repos/simonw/click-app-template-repository"", ""forks_url"": ""https://api.github.com/repos/simonw/click-app-template-repository/forks"", ""keys_url"": ""https://api.github.com/repos/simonw/click-app-template-repository/keys{/key_id}"", ""collaborators_url"": ""https://api.github.com/repos/simonw/click-app-template-repository/collaborators{/collaborator}"", ""teams_url"": ""https://api.github.com/repos/simonw/click-app-template-repository/teams"", ""hooks_url"": ""https://api.github.com/repos/simonw/click-app-template-repository/hooks"", ""issue_events_url"": ""https://api.github.com/repos/simonw/click-app-template-repository/issues/events{/number}"", ""events_url"": ""https://api.github.com/repos/simonw/click-app-template-repository/events"", ""assignees_url"": ""https://api.github.com/repos/simonw/click-app-template-repository/assignees{/user}"", ""branches_url"": ""https://api.github.com/repos/simonw/click-app-template-repository/branches{/branch}"", ""tags_url"": ""https://api.github.com/repos/simonw/click-app-template-repository/tags"", ""blobs_url"": ""https://api.github.com/repos/simonw/click-app-template-repository/git/blobs{/sha}"", ""git_tags_url"": ""https://api.github.com/repos/simonw/click-app-template-repository/git/tags{/sha}"", ""git_refs_url"": ""https://api.github.com/repos/simonw/click-app-template-repository/git/refs{/sha}"", ""trees_url"": ""https://api.github.com/repos/simonw/click-app-template-repository/git/trees{/sha}"", ""statuses_url"": ""https://api.github.com/repos/simonw/click-app-template-repository/statuses/{sha}"", ""languages_url"": ""https://api.github.com/repos/simonw/click-app-template-repository/languages"", ""stargazers_url"": ""https://api.github.com/repos/simonw/click-app-template-repository/stargazers"", ""contributors_url"": ""https://api.github.com/repos/simonw/click-app-template-repository/contributors"", ""subscribers_url"": ""https://api.github.com/repos/simonw/click-app-template-repository/subscribers"", ""subscription_url"": ""https://api.github.com/repos/simonw/click-app-template-repository/subscription"", 
""commits_url"": ""https://api.github.com/repos/simonw/click-app-template-repository/commits{/sha}"", ""git_commits_url"": ""https://api.github.com/repos/simonw/click-app-template-repository/git/commits{/sha}"", ""comments_url"": ""https://api.github.com/repos/simonw/click-app-template-repository/comments{/number}"", ""issue_comment_url"": ""https://api.github.com/repos/simonw/click-app-template-repository/issues/comments{/number}"", ""contents_url"": ""https://api.github.com/repos/simonw/click-app-template-repository/contents/{+path}"", ""compare_url"": ""https://api.github.com/repos/simonw/click-app-template-repository/compare/{base}...{head}"", ""merges_url"": ""https://api.github.com/repos/simonw/click-app-template-repository/merges"", ""archive_url"": ""https://api.github.com/repos/simonw/click-app-template-repository/{archive_format}{/ref}"", ""downloads_url"": ""https://api.github.com/repos/simonw/click-app-template-repository/downloads"", ""issues_url"": ""https://api.github.com/repos/simonw/click-app-template-repository/issues{/number}"", ""pulls_url"": ""https://api.github.com/repos/simonw/click-app-template-repository/pulls{/number}"", ""milestones_url"": ""https://api.github.com/repos/simonw/click-app-template-repository/milestones{/number}"", ""notifications_url"": ""https://api.github.com/repos/simonw/click-app-template-repository/notifications{?since,all,participating}"", ""labels_url"": ""https://api.github.com/repos/simonw/click-app-template-repository/labels{/name}"", ""releases_url"": ""https://api.github.com/repos/simonw/click-app-template-repository/releases{/id}"", ""deployments_url"": ""https://api.github.com/repos/simonw/click-app-template-repository/deployments"", ""created_at"": ""2021-08-30T01:03:34Z"", ""updated_at"": ""2022-07-17T02:01:39Z"", ""pushed_at"": ""2022-03-16T23:35:31Z"", ""git_url"": ""git://github.com/simonw/click-app-template-repository.git"", ""ssh_url"": ""git@github.com:simonw/click-app-template-repository.git"", ""clone_url"": ""https://github.com/simonw/click-app-template-repository.git"", ""svn_url"": ""https://github.com/simonw/click-app-template-repository"", ""homepage"": """", ""size"": 12, ""stargazers_count"": 8, ""watchers_count"": 8, ""language"": null, ""has_issues"": true, ""has_projects"": true, ""has_downloads"": true, ""has_wiki"": true, ""has_pages"": false, ""forks_count"": 0, ""mirror_url"": null, ""archived"": false, ""disabled"": false, ""open_issues_count"": 0, ""license"": null, ""allow_forking"": true, ""is_template"": true, ""web_commit_signoff_required"": false, ""topics"": [], ""visibility"": ""public"", ""forks"": 0, ""open_issues"": 0, ""watchers"": 8, ""default_branch"": ""main"", ""permissions"": {""admin"": false, ""maintain"": false, ""push"": false, ""triage"": false, ""pull"": false}, ""temp_clone_token"": """"}",0,
430224716,R_kgDOGaS1TA,datasette-redirect-to-https,simonw/datasette-redirect-to-https,0,9599,https://github.com/simonw/datasette-redirect-to-https,Datasette plugin that redirects all non-https requests to https,0,2021-11-20T22:43:33Z,2022-04-24T03:48:01Z,2022-07-07T17:38:32Z,,12,1,1,Python,1,1,1,1,0,0,0,0,0,,"[""asgi"", ""datasette"", ""datasette-io"", ""datasette-plugin""]",0,0,1,main,"{""admin"": false, ""maintain"": false, ""push"": false, ""triage"": false, ""pull"": false}",,,0,1,"# datasette-redirect-to-https
[](https://pypi.org/project/datasette-redirect-to-https/)
[](https://github.com/simonw/datasette-redirect-to-https/releases)
[](https://github.com/simonw/datasette-redirect-to-https/actions?query=workflow%3ATest)
[](https://github.com/simonw/datasette-redirect-to-https/blob/main/LICENSE)
Datasette plugin that redirects all non-https requests to https
## Installation
Install this plugin in the same environment as Datasette.
$ datasette install datasette-redirect-to-https
## Usage
Once installed, incoming GET requests made over the `http://` protocol will receive a 301 redirect to the `https://` equivalent page.
Requests using HTTP verbs other than GET will receive a 405 Method Not Allowed error.
## Configuration
Some hosting providers handle HTTPS for you, passing requests back to your application server over HTTP.
For this plugin to work correctly, it needs to detect that the original incoming request came in over HTTP.
Hosting providers like this often set an additional HTTP header such as `x-forwarded-proto: http` identifying the original protocol.
You can configure `datasette-redirect-to-https` to respect this header using the following plugin configuration in `metadata.json`:
```json
{
""plugins"": {
""datasette-redirect-to-https"": {
""if_headers"": {
""x-forwarded-proto"": ""http""
}
}
}
}
```
The above example will redirect to `https://` if the incoming request has an `x-forwarded-proto: http` request header.
If multiple `if_headers` are listed, the redirect will occur if any of them match.
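For example, a configuration along these lines would trigger the redirect if either header matches - here `x-scheme` is just an illustrative stand-in for whatever additional header your hosting provider happens to set:
```json
{
    ""plugins"": {
        ""datasette-redirect-to-https"": {
            ""if_headers"": {
                ""x-forwarded-proto"": ""http"",
                ""x-scheme"": ""http""
            }
        }
    }
}
```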
## Development
To set up this plugin locally, first check out the code. Then create a new virtual environment:
cd datasette-redirect-to-https
python3 -m venv venv
source venv/bin/activate
Now install the dependencies and test dependencies:
pip install -e '.[test]'
To run the tests:
pytest
","
",1,public,0,"{""id"": 400878073, ""node_id"": ""MDEwOlJlcG9zaXRvcnk0MDA4NzgwNzM="", ""name"": ""datasette-plugin-template-repository"", ""full_name"": ""simonw/datasette-plugin-template-repository"", ""private"": false, ""owner"": {""login"": ""simonw"", ""id"": 9599, ""node_id"": ""MDQ6VXNlcjk1OTk="", ""avatar_url"": ""https://avatars.githubusercontent.com/u/9599?v=4"", ""gravatar_id"": """", ""url"": ""https://api.github.com/users/simonw"", ""html_url"": ""https://github.com/simonw"", ""followers_url"": ""https://api.github.com/users/simonw/followers"", ""following_url"": ""https://api.github.com/users/simonw/following{/other_user}"", ""gists_url"": ""https://api.github.com/users/simonw/gists{/gist_id}"", ""starred_url"": ""https://api.github.com/users/simonw/starred{/owner}{/repo}"", ""subscriptions_url"": ""https://api.github.com/users/simonw/subscriptions"", ""organizations_url"": ""https://api.github.com/users/simonw/orgs"", ""repos_url"": ""https://api.github.com/users/simonw/repos"", ""events_url"": ""https://api.github.com/users/simonw/events{/privacy}"", ""received_events_url"": ""https://api.github.com/users/simonw/received_events"", ""type"": ""User"", ""site_admin"": false}, ""html_url"": ""https://github.com/simonw/datasette-plugin-template-repository"", ""description"": ""GitHub template repository for creating new Datasette plugins, using the simonw/datasette-plugin cookiecutter template"", ""fork"": false, ""url"": ""https://api.github.com/repos/simonw/datasette-plugin-template-repository"", ""forks_url"": ""https://api.github.com/repos/simonw/datasette-plugin-template-repository/forks"", ""keys_url"": ""https://api.github.com/repos/simonw/datasette-plugin-template-repository/keys{/key_id}"", ""collaborators_url"": ""https://api.github.com/repos/simonw/datasette-plugin-template-repository/collaborators{/collaborator}"", ""teams_url"": ""https://api.github.com/repos/simonw/datasette-plugin-template-repository/teams"", ""hooks_url"": ""https://api.github.com/repos/simonw/datasette-plugin-template-repository/hooks"", ""issue_events_url"": ""https://api.github.com/repos/simonw/datasette-plugin-template-repository/issues/events{/number}"", ""events_url"": ""https://api.github.com/repos/simonw/datasette-plugin-template-repository/events"", ""assignees_url"": ""https://api.github.com/repos/simonw/datasette-plugin-template-repository/assignees{/user}"", ""branches_url"": ""https://api.github.com/repos/simonw/datasette-plugin-template-repository/branches{/branch}"", ""tags_url"": ""https://api.github.com/repos/simonw/datasette-plugin-template-repository/tags"", ""blobs_url"": ""https://api.github.com/repos/simonw/datasette-plugin-template-repository/git/blobs{/sha}"", ""git_tags_url"": ""https://api.github.com/repos/simonw/datasette-plugin-template-repository/git/tags{/sha}"", ""git_refs_url"": ""https://api.github.com/repos/simonw/datasette-plugin-template-repository/git/refs{/sha}"", ""trees_url"": ""https://api.github.com/repos/simonw/datasette-plugin-template-repository/git/trees{/sha}"", ""statuses_url"": ""https://api.github.com/repos/simonw/datasette-plugin-template-repository/statuses/{sha}"", ""languages_url"": ""https://api.github.com/repos/simonw/datasette-plugin-template-repository/languages"", ""stargazers_url"": ""https://api.github.com/repos/simonw/datasette-plugin-template-repository/stargazers"", ""contributors_url"": ""https://api.github.com/repos/simonw/datasette-plugin-template-repository/contributors"", ""subscribers_url"": 
""https://api.github.com/repos/simonw/datasette-plugin-template-repository/subscribers"", ""subscription_url"": ""https://api.github.com/repos/simonw/datasette-plugin-template-repository/subscription"", ""commits_url"": ""https://api.github.com/repos/simonw/datasette-plugin-template-repository/commits{/sha}"", ""git_commits_url"": ""https://api.github.com/repos/simonw/datasette-plugin-template-repository/git/commits{/sha}"", ""comments_url"": ""https://api.github.com/repos/simonw/datasette-plugin-template-repository/comments{/number}"", ""issue_comment_url"": ""https://api.github.com/repos/simonw/datasette-plugin-template-repository/issues/comments{/number}"", ""contents_url"": ""https://api.github.com/repos/simonw/datasette-plugin-template-repository/contents/{+path}"", ""compare_url"": ""https://api.github.com/repos/simonw/datasette-plugin-template-repository/compare/{base}...{head}"", ""merges_url"": ""https://api.github.com/repos/simonw/datasette-plugin-template-repository/merges"", ""archive_url"": ""https://api.github.com/repos/simonw/datasette-plugin-template-repository/{archive_format}{/ref}"", ""downloads_url"": ""https://api.github.com/repos/simonw/datasette-plugin-template-repository/downloads"", ""issues_url"": ""https://api.github.com/repos/simonw/datasette-plugin-template-repository/issues{/number}"", ""pulls_url"": ""https://api.github.com/repos/simonw/datasette-plugin-template-repository/pulls{/number}"", ""milestones_url"": ""https://api.github.com/repos/simonw/datasette-plugin-template-repository/milestones{/number}"", ""notifications_url"": ""https://api.github.com/repos/simonw/datasette-plugin-template-repository/notifications{?since,all,participating}"", ""labels_url"": ""https://api.github.com/repos/simonw/datasette-plugin-template-repository/labels{/name}"", ""releases_url"": ""https://api.github.com/repos/simonw/datasette-plugin-template-repository/releases{/id}"", ""deployments_url"": ""https://api.github.com/repos/simonw/datasette-plugin-template-repository/deployments"", ""created_at"": ""2021-08-28T19:50:28Z"", ""updated_at"": ""2022-06-10T13:28:46Z"", ""pushed_at"": ""2022-03-16T23:42:16Z"", ""git_url"": ""git://github.com/simonw/datasette-plugin-template-repository.git"", ""ssh_url"": ""git@github.com:simonw/datasette-plugin-template-repository.git"", ""clone_url"": ""https://github.com/simonw/datasette-plugin-template-repository.git"", ""svn_url"": ""https://github.com/simonw/datasette-plugin-template-repository"", ""homepage"": """", ""size"": 9, ""stargazers_count"": 15, ""watchers_count"": 15, ""language"": null, ""has_issues"": true, ""has_projects"": true, ""has_downloads"": true, ""has_wiki"": true, ""has_pages"": false, ""forks_count"": 0, ""mirror_url"": null, ""archived"": false, ""disabled"": false, ""open_issues_count"": 0, ""license"": null, ""allow_forking"": true, ""is_template"": true, ""web_commit_signoff_required"": false, ""topics"": [], ""visibility"": ""public"", ""forks"": 0, ""open_issues"": 0, ""watchers"": 15, ""default_branch"": ""main"", ""permissions"": {""admin"": false, ""maintain"": false, ""push"": false, ""triage"": false, ""pull"": false}, ""temp_clone_token"": """"}",0,
434308974,R_kgDOGeMHbg,datasette-hovercards,simonw/datasette-hovercards,0,9599,https://github.com/simonw/datasette-hovercards,Add preview hovercards to links in Datasette,0,2021-12-02T17:11:59Z,2022-02-08T07:22:21Z,2021-12-02T19:57:32Z,,8,2,2,JavaScript,1,1,1,1,0,0,0,0,1,,[],0,1,2,main,"{""admin"": false, ""maintain"": false, ""push"": false, ""triage"": false, ""pull"": false}",,,0,1,"# datasette-hovercards
[](https://pypi.org/project/datasette-hovercards/)
[](https://github.com/simonw/datasette-hovercards/releases)
[](https://github.com/simonw/datasette-hovercards/actions?query=workflow%3ATest)
[](https://github.com/simonw/datasette-hovercards/blob/main/LICENSE)
Add preview hovercards to links in Datasette
## Installation
Install this plugin in the same environment as Datasette.
$ datasette install datasette-hovercards
## Usage
Once installed, hovering over a link to a row within the Datasette interface - for example a foreign key reference on the table page - should show a hovercard with a preview of that row.
For a live demo, hover over values in the `user`, `milestone` or `repo` columns on this table page:
https://latest-with-plugins.datasette.io/github/issues
## Development
To set up this plugin locally, first check out the code. Then create a new virtual environment:
cd datasette-hovercards
python3 -mvenv venv
source venv/bin/activate
Or if you are using `pipenv`:
pipenv shell
Now install the dependencies and test dependencies:
pip install -e '.[test]'
To run the tests:
pytest
","
",1,public,0,,,
438003374,R_kgDOGhtmrg,datasette-pretty-traces,simonw/datasette-pretty-traces,0,9599,https://github.com/simonw/datasette-pretty-traces,Prettier formatting for ?_trace=1 traces,0,2021-12-13T19:43:28Z,2021-12-19T20:40:10Z,2022-01-14T02:08:51Z,,22,2,2,JavaScript,1,1,1,1,0,0,0,0,0,apache-2.0,"[""datasette"", ""datasette-io"", ""datasette-plugin""]",0,0,2,main,"{""admin"": false, ""maintain"": false, ""push"": false, ""triage"": false, ""pull"": false}",,,0,1,"# datasette-pretty-traces
[](https://pypi.org/project/datasette-pretty-traces/)
[](https://github.com/simonw/datasette-pretty-traces/releases)
[](https://github.com/simonw/datasette-pretty-traces/actions?query=workflow%3ATest)
[](https://github.com/simonw/datasette-pretty-traces/blob/main/LICENSE)
Prettier formatting for `?_trace=1` traces
## Installation
Install this plugin in the same environment as Datasette.
$ datasette install datasette-pretty-traces
## Usage
Once installed, run Datasette using `--setting trace_debug 1`:
datasette fixtures.db --setting trace_debug 1
Then navigate to any page and add `?_trace=` to the URL:
http://localhost:8001/?_trace=1
The plugin will scroll you down the page to the visualized trace information.
## Demo
You can try out the demo here:
- [/?_trace=1](https://latest-with-plugins.datasette.io/?_trace=1) tracing the homepage
- [/github/commits?_trace=1](https://latest-with-plugins.datasette.io/github/commits?_trace=1) tracing a table page
## Screenshot

## Development
To set up this plugin locally, first check out the code. Then create a new virtual environment:
cd datasette-pretty-traces
python3 -mvenv venv
source venv/bin/activate
Or if you are using `pipenv`:
pipenv shell
Now install the dependencies and test dependencies:
pip install -e '.[test]'
To run the tests:
pytest
","
",1,public,0,,,
441024802,R_kgDOGkmBIg,datasette-tiddlywiki,simonw/datasette-tiddlywiki,0,9599,https://github.com/simonw/datasette-tiddlywiki,Run TiddlyWiki in Datasette and save Tiddlers to a SQLite database,0,2021-12-23T01:05:56Z,2022-02-14T08:57:33Z,2022-03-08T01:36:10Z,,426,22,22,HTML,1,1,1,1,0,0,0,0,3,apache-2.0,[],0,3,22,main,"{""admin"": false, ""maintain"": false, ""push"": false, ""triage"": false, ""pull"": false}",,,0,2,"# datasette-tiddlywiki
[](https://pypi.org/project/datasette-tiddlywiki/)
[](https://github.com/simonw/datasette-tiddlywiki/releases)
[](https://github.com/simonw/datasette-tiddlywiki/actions?query=workflow%3ATest)
[](https://github.com/simonw/datasette-tiddlywiki/blob/main/LICENSE)
Run [TiddlyWiki](https://tiddlywiki.com/) in Datasette and save Tiddlers to a SQLite database
Read more about this project [on my blog](https://simonwillison.net/2021/Dec/24/datasette-tiddlywiki/).
## Installation
Install this plugin in the same environment as Datasette.
$ datasette install datasette-tiddlywiki
## Usage
Start Datasette with a `tiddlywiki.db` database. You can create it if it does not yet exist using `--create`.
You need to be signed in as the `root` user to write to the wiki, so use the `--root` option and click on the link it provides:
% datasette tiddlywiki.db --create --root
http://127.0.0.1:8001/-/auth-token?token=456670f1e8d01a8a33b71e17653130de17387336e29afcdfb4ab3d18261e6630
# ...
Navigate to `/-/tiddlywiki` on your instance to interact with TiddlyWiki.
## Authentication and permissions
By default, the wiki can be read by anyone who has permission to read the `tiddlywiki.db` database. Only the signed in `root` user can write to it.
You can sign in using the `--root` option described above, or you can set a password for that user using the [datasette-auth-passwords](https://datasette.io/plugins/datasette-auth-passwords) plugin and sign in using the `/-/login` page.
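As a sketch of what that might look like, assuming the password hash is stored in a `ROOT_PASSWORD_HASH` environment variable (see the datasette-auth-passwords documentation for how to generate a hash), the plugin configuration could be:
```json
{
    ""plugins"": {
        ""datasette-auth-passwords"": {
            ""root_password_hash"": {
                ""$env"": ""ROOT_PASSWORD_HASH""
            }
        }
    }
}
```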
You can use the `edit-tiddlywiki` permission to grant edit permissions to other users, using another plugin such as [datasette-permissions-sql](https://datasette.io/plugins/datasette-permissions-sql).
You can use the `view-database` permission against the `tiddlywiki` database to control who can view the wiki.
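For example, one sketch of this using Datasette's standard `allow` blocks in `metadata.json` - restricting viewing of the wiki to the `root` actor - would be:
```json
{
    ""databases"": {
        ""tiddlywiki"": {
            ""allow"": {
                ""id"": ""root""
            }
        }
    }
}
```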
Datasette's permissions mechanism is described in full in [the Datasette documentation](https://docs.datasette.io/en/stable/authentication.html).
## Development
To set up this plugin locally, first check out the code. Then create a new virtual environment:
cd datasette-tiddlywiki
python3 -mvenv venv
source venv/bin/activate
Or if you are using `pipenv`:
pipenv shell
Now install the dependencies and test dependencies:
pip install -e '.[test]'
To run the tests:
pytest
","
",1,public,0,,,
459821110,R_kgDOG2hQNg,google-drive-to-sqlite,simonw/google-drive-to-sqlite,0,9599,https://github.com/simonw/google-drive-to-sqlite,Create a SQLite database containing metadata from Google Drive,0,2022-02-16T02:16:29Z,2022-05-17T00:30:43Z,2022-05-21T16:56:11Z,https://datasette.io/tools/google-drive-to-sqlite,74,133,133,Python,1,1,1,1,0,11,0,0,9,apache-2.0,[],11,9,133,main,"{""admin"": false, ""maintain"": false, ""push"": false, ""triage"": false, ""pull"": false}",,,11,3,"# google-drive-to-sqlite
[](https://pypi.org/project/google-drive-to-sqlite/)
[](https://github.com/simonw/google-drive-to-sqlite/releases)
[](https://github.com/simonw/google-drive-to-sqlite/actions?query=workflow%3ATest)
[](https://github.com/simonw/google-drive-to-sqlite/blob/master/LICENSE)
Create a SQLite database containing metadata from [Google Drive](https://www.google.com/drive)
For background on this project, see [Google Drive to SQLite](https://simonwillison.net/2022/Feb/20/google-drive-to-sqlite/) on my blog.
If you use Google Drive, and especially if you have shared drives with other people, there's a good chance you have hundreds or even thousands of files that you may not be fully aware of.
This tool can download metadata about those files - their names, sizes, folders, content types, permissions, creation dates and more - and store it in a SQLite database.
This lets you use SQL to analyze your Google Drive contents, using [Datasette](https://datasette.io/) or the SQLite command-line tool or any other SQLite database browsing software.
## Installation
Install this tool using `pip`:
pip install google-drive-to-sqlite
## Quickstart
Authenticate with Google Drive by running:
google-drive-to-sqlite auth
Now create a SQLite database with metadata about all of the files you have starred using:
google-drive-to-sqlite files starred.db --starred
You can explore the resulting database using [Datasette](https://datasette.io/):
$ pip install datasette
$ datasette starred.db
INFO: Started server process [24661]
INFO: Uvicorn running on http://127.0.0.1:8001
## Authentication
> :warning: **This application has not yet been verified by Google** - you may find you are unable to authenticate until that verification is complete. [#10](https://github.com/simonw/google-drive-to-sqlite/issues/10)
>
> You can work around this issue by [creating your own OAuth client ID key](https://til.simonwillison.net/googlecloud/google-oauth-cli-application) and passing it to the `auth` command using `--google-client-id` and `--google-client-secret`.
First, authenticate with Google Drive using the `auth` command:
$ google-drive-to-sqlite auth
Visit the following URL to authenticate with Google Drive
https://accounts.google.com/o/oauth2/v2/auth?...
Then return here and paste in the resulting code:
Paste code here:
Follow the link, sign in with Google Drive and then copy and paste the resulting code back into the tool.
This will save an authentication token to the file called `auth.json` in the current directory.
To specify a different location for that file, use the `--auth` option:
google-drive-to-sqlite auth --auth ~/google-drive-auth.json
The `auth` command also provides options for using a different scope, Google client ID and Google client secret. You can use these to create your own custom authentication tokens that can work with other Google APIs; see [issue #5](https://github.com/simonw/google-drive-to-sqlite/issues/5) for details.
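For example, assuming you have created your own OAuth client ID and secret, you could authenticate with them like this (the placeholder values are your own credentials):

    google-drive-to-sqlite auth \
        --google-client-id YOUR_CLIENT_ID \
        --google-client-secret YOUR_CLIENT_SECRET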
Full `--help`:
```
Usage: google-drive-to-sqlite auth [OPTIONS]
Authenticate user and save credentials
Options:
-a, --auth FILE Path to save token, defaults to auth.json
--google-client-id TEXT Custom Google client ID
--google-client-secret TEXT Custom Google client secret
--scope TEXT Custom token scope
--help Show this message and exit.
```
To revoke the token that is stored in `auth.json`, such that it cannot be used to access Google Drive in the future, run the `revoke` command:
google-drive-to-sqlite revoke
Or if your token is stored in another location:
google-drive-to-sqlite revoke -a ~/google-drive-auth.json
You will need to obtain a fresh token using the `auth` command in order to continue using this tool.
## google-drive-to-sqlite files
To retrieve metadata about the files in your Google Drive, or a folder or search within it, use the `google-drive-to-sqlite files` command.
This will default to writing details about every file in your Google Drive to a SQLite database:
google-drive-to-sqlite files files.db
Files and folders will be written to database tables, which will be created if they do not yet exist. The database schema is [shown below](#database-schema).
If a file or folder already exists, based on a matching `id`, it will be replaced with fresh data.
Instead of writing to SQLite you can use `--json` to output as JSON, or `--nl` to output as newline-delimited JSON:
google-drive-to-sqlite files --nl
Use `--folder ID` to retrieve everything in a specified folder and its sub-folders:
google-drive-to-sqlite files files.db --folder 1E6Zg2X2bjjtPzVfX8YqdXZDCoB3AVA7i
Use `--q QUERY` to use a [custom search query](https://developers.google.com/drive/api/v3/reference/query-ref):
google-drive-to-sqlite files files.db -q ""viewedByMeTime > '2022-01-01'""
The following shortcut options help build queries:
- `--full-text TEXT` to search for files where the full text matches a search term
- `--starred` for files and folders you have starred
- `--trashed` for files and folders in the trash
- `--shared-with-me` for files and folders that have been shared with you
- `--apps` for Google Apps documents, spreadsheets, presentations and drawings (equivalent to setting all of the next four options)
- `--docs` for Google Apps documents
- `--sheets` for Google Apps spreadsheets
- `--presentations` for Google Apps presentations
- `--drawings` for Google Apps drawings
You can combine these - for example, this returns all files that you have starred and that were shared with you:
google-drive-to-sqlite files highlights.db \
--starred --shared-with-me
Multiple options are treated as AND, with the exception of the Google Apps options, which are treated as OR - so the following would retrieve all spreadsheets and presentations that have also been starred:
google-drive-to-sqlite files highlights.db \
--starred --sheets --presentations
You can use `--stop-after X` to stop after retrieving X files, which is useful for trying out a new search pattern and seeing results straight away.
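For example, the following command - using a hypothetical database name and search term - would fetch just the first 20 files whose full text mentions budget:

    google-drive-to-sqlite files sample.db --full-text budget --stop-after 20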
The `--import-json` and `--import-nl` options are mainly useful for testing and developing this tool. They allow you to replay the JSON or newline-delimited JSON that was previously fetched using `--json` or `--nl` and use it to create a fresh SQLite database, without needing to make any outbound API calls:
# Fetch all starred files from the API, write to starred.json
google-drive-to-sqlite files -q 'starred = true' --json > starred.json
# Now import that data into a new SQLite database file
google-drive-to-sqlite files starred.db --import-json starred.json
Full `--help`:
```
Usage: google-drive-to-sqlite files [OPTIONS] [DATABASE]
Retrieve metadata for files in Google Drive, and write to a SQLite database or
output as JSON.
google-drive-to-sqlite files files.db
Use --json to output JSON, --nl for newline-delimited JSON:
google-drive-to-sqlite files files.db --json
Use a folder ID to recursively fetch every file in that folder and its sub-
folders:
google-drive-to-sqlite files files.db --folder
1E6Zg2X2bjjtPzVfX8YqdXZDCoB3AVA7i
Fetch files you have starred:
google-drive-to-sqlite files starred.db --starred
Options:
-a, --auth FILE Path to auth.json token file
--folder TEXT Files in this folder ID and its sub-folders
-q TEXT Files matching this query
--full-text TEXT Search for files with text match
--starred Files you have starred
--trashed Files in the trash
--shared-with-me Files that have been shared with you
--apps Google Apps docs, spreadsheets, presentations and
drawings
--docs Google Apps docs
--sheets Google Apps spreadsheets
--presentations Google Apps presentations
--drawings Google Apps drawings
--json Output JSON rather than write to DB
--nl Output newline-delimited JSON rather than write to DB
--stop-after INTEGER Stop paginating after X results
--import-json FILE Import from this JSON file instead of the API
--import-nl FILE Import from this newline-delimited JSON file
-v, --verbose Send verbose output to stderr
--help Show this message and exit.
```
## google-drive-to-sqlite download FILE_ID
The `download` command can be used to download files from Google Drive.
You'll need one or more file IDs, which look something like `0B32uDVNZfiEKLUtIT1gzYWN2NDI4SzVQYTFWWWxCWUtvVGNB`.
To download the file, run this:
google-drive-to-sqlite download 0B32uDVNZfiEKLUtIT1gzYWN2NDI4SzVQYTFWWWxCWUtvVGNB
This will detect the content type of the file and use that as the extension - so if this file is a JPEG it will be downloaded as:
0B32uDVNZfiEKLUtIT1gzYWN2NDI4SzVQYTFWWWxCWUtvVGNB.jpeg
You can pass multiple file IDs to the command at once.
To hide the progress bar and filename output, use `-s` or `--silent`.
If you are downloading a single file you can use the `-o` option to specify a filename and location:
google-drive-to-sqlite download 0B32uDVNZfiEKLUtIT1gzYWN2NDI4SzVQYTFWWWxCWUtvVGNB \
-o my-image.jpeg
Use `-o -` to write the file contents to standard output:
google-drive-to-sqlite download 0B32uDVNZfiEKLUtIT1gzYWN2NDI4SzVQYTFWWWxCWUtvVGNB \
-o - > my-image.jpeg
Full `--help`:
```
Usage: google-drive-to-sqlite download [OPTIONS] FILE_IDS...
Download one or more files to disk, based on their file IDs.
The file content will be saved to a file with the name:
FILE_ID.ext
Where the extension is automatically picked based on the type of file.
If you are downloading a single file you can specify a filename with -o:
google-drive-to-sqlite download MY_FILE_ID -o myfile.txt
Options:
-a, --auth FILE Path to auth.json token file
-o, --output FILE File to write to, or - for standard output
-s, --silent Hide progress bar and filename
--help Show this message and exit.
```
## google-drive-to-sqlite export FORMAT FILE_ID
The `export` command can be used to export Google Docs documents, spreadsheets and presentations in a number of different formats.
You'll need one or more document IDs, which look something like `10BOHGDUYa7lBjUSo26YFCHTpgEmtXabdVFaopCTh1vU`. You can find these by looking at the URL of your document on the Google Docs site.
To export that document as PDF, run this:
google-drive-to-sqlite export pdf 10BOHGDUYa7lBjUSo26YFCHTpgEmtXabdVFaopCTh1vU
The file will be exported as:
10BOHGDUYa7lBjUSo26YFCHTpgEmtXabdVFaopCTh1vU-export.pdf
You can pass multiple file IDs to the command at once.
For the `FORMAT` option you can use any of the mime type options listed [on this page](https://developers.google.com/drive/api/v3/ref-export-formats) - for example, to export as an Open Office document you could use:
google-drive-to-sqlite export \
application/vnd.oasis.opendocument.text \
10BOHGDUYa7lBjUSo26YFCHTpgEmtXabdVFaopCTh1vU
For convenience the following shortcuts for common file formats are provided:
- Google Docs: `html`, `txt`, `rtf`, `pdf`, `doc`, `zip`, `epub`
- Google Sheets: `xls`, `pdf`, `csv`, `tsv`, `zip`
- Presentations: `ppt`, `pdf`, `txt`
- Drawings: `jpeg`, `png`, `svg`
The `zip` option returns a zip file of HTML. `txt` returns plain text. The others should be self-evident.
To hide the filename output, use `-s` or `--silent`.
If you are exporting a single file you can use the `-o` option to specify a filename and location:
google-drive-to-sqlite export pdf 10BOHGDUYa7lBjUSo26YFCHTpgEmtXabdVFaopCTh1vU \
-o my-document.pdf
Use `-o -` to write the file contents to standard output:
google-drive-to-sqlite export pdf 10BOHGDUYa7lBjUSo26YFCHTpgEmtXabdVFaopCTh1vU \
-o - > my-document.pdf
Full `--help`:
```
Usage: google-drive-to-sqlite export [OPTIONS] FORMAT FILE_IDS...
Export one or more files to the specified format.
Usage:
google-drive-to-sqlite export pdf FILE_ID_1 FILE_ID_2
The file content will be saved to a file with the name:
FILE_ID-export.ext
Where the extension is based on the format you specified.
Available export formats can be seen here:
https://developers.google.com/drive/api/v3/ref-export-formats
Or you can use one of the following shortcuts:
- Google Docs: html, txt, rtf, pdf, doc, zip, epub
- Google Sheets: xls, pdf, csv, tsv, zip
- Presentations: ppt, pdf, txt
- Drawings: jpeg, png, svg
""zip"" returns a zip file of HTML.
If you are exporting a single file you can specify a filename with -o:
google-drive-to-sqlite export zip MY_FILE_ID -o myfile.zip
Options:
-a, --auth FILE Path to auth.json token file
-o, --output FILE File to write to, or - for standard output
-s, --silent Hide progress bar and filename
--help Show this message and exit.
```
## google-drive-to-sqlite get URL
The `get` command makes authenticated requests to the specified URL, using credentials derived from the `auth.json` file.
For example:
$ google-drive-to-sqlite get 'https://www.googleapis.com/drive/v3/about?fields=*'
{
""kind"": ""drive#about"",
""user"": {
""kind"": ""drive#user"",
""displayName"": ""Simon Willison"",
# ...
If the resource you are fetching supports pagination you can use `--paginate key` to paginate through all of the rows in a specified key. For example, the following API has a `nextPageToken` key and a `files` list, suggesting it supports pagination:
$ google-drive-to-sqlite get https://www.googleapis.com/drive/v3/files
{
""kind"": ""drive#fileList"",
""nextPageToken"": ""~!!~AI9...wogHHYlc="",
""incompleteSearch"": false,
""files"": [
{
""kind"": ""drive#file"",
""id"": ""1YEsITp_X8PtDUJWHGM0osT-TXAU1nr0e7RSWRM2Jpyg"",
""name"": ""Title of a spreadsheet"",
""mimeType"": ""application/vnd.google-apps.spreadsheet""
},
To paginate through everything in the `files` list you would use `--paginate files` like this:
$ google-drive-to-sqlite get https://www.googleapis.com/drive/v3/files --paginate files
[
{
""kind"": ""drive#file"",
""id"": ""1YEsITp_X8PtDUJWHGM0osT-TXAU1nr0e7RSWRM2Jpyg"",
""name"": ""Title of a spreadsheet"",
""mimeType"": ""application/vnd.google-apps.spreadsheet""
},
# ...
Add `--nl` to stream paginated data as newline-delimited JSON:
$ google-drive-to-sqlite get https://www.googleapis.com/drive/v3/files --paginate files --nl
{""kind"": ""drive#file"", ""id"": ""1YEsITp_X8PtDUJWHGM0osT-TXAU1nr0e7RSWRM2Jpyg"", ""name"": ""Title of a spreadsheet"", ""mimeType"": ""application/vnd.google-apps.spreadsheet""}
{""kind"": ""drive#file"", ""id"": ""1E6Zg2X2bjjtPzVfX8YqdXZDCoB3AVA7i"", ""name"": ""Subfolder"", ""mimeType"": ""application/vnd.google-apps.folder""}
Add `--stop-after 5` to stop after 5 records - useful for testing.
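For example, this hypothetical combination would fetch just the first five files as newline-delimited JSON:

    google-drive-to-sqlite get https://www.googleapis.com/drive/v3/files \
        --paginate files --nl --stop-after 5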
Full `--help`:
```
Usage: google-drive-to-sqlite get [OPTIONS] URL
Make an authenticated HTTP GET to the specified URL
Options:
-a, --auth FILE Path to auth.json token file
--paginate TEXT Paginate through all results in this key
--nl Output paginated data as newline-delimited JSON
--stop-after INTEGER Stop paginating after X results
-v, --verbose Send verbose output to stderr
--help Show this message and exit.
```
## Database schema
The database created by this tool has the following schema:
```sql
CREATE TABLE [drive_users] (
[permissionId] TEXT PRIMARY KEY,
[kind] TEXT,
[displayName] TEXT,
[photoLink] TEXT,
[me] INTEGER,
[emailAddress] TEXT
);
CREATE TABLE [drive_folders] (
[id] TEXT PRIMARY KEY,
[_parent] TEXT,
[_owner] TEXT,
[lastModifyingUser] TEXT,
[kind] TEXT,
[name] TEXT,
[mimeType] TEXT,
[starred] INTEGER,
[trashed] INTEGER,
[explicitlyTrashed] INTEGER,
[parents] TEXT,
[spaces] TEXT,
[version] TEXT,
[webViewLink] TEXT,
[iconLink] TEXT,
[hasThumbnail] INTEGER,
[thumbnailVersion] TEXT,
[viewedByMe] INTEGER,
[createdTime] TEXT,
[modifiedTime] TEXT,
[modifiedByMe] INTEGER,
[shared] INTEGER,
[ownedByMe] INTEGER,
[viewersCanCopyContent] INTEGER,
[copyRequiresWriterPermission] INTEGER,
[writersCanShare] INTEGER,
[folderColorRgb] TEXT,
[quotaBytesUsed] TEXT,
[isAppAuthorized] INTEGER,
[linkShareMetadata] TEXT,
FOREIGN KEY([_parent]) REFERENCES [drive_folders]([id]),
FOREIGN KEY([_owner]) REFERENCES [drive_users]([permissionId]),
FOREIGN KEY([lastModifyingUser]) REFERENCES [drive_users]([permissionId])
);
CREATE TABLE [drive_files] (
[id] TEXT PRIMARY KEY,
[_parent] TEXT,
[_owner] TEXT,
[lastModifyingUser] TEXT,
[kind] TEXT,
[name] TEXT,
[mimeType] TEXT,
[starred] INTEGER,
[trashed] INTEGER,
[explicitlyTrashed] INTEGER,
[parents] TEXT,
[spaces] TEXT,
[version] TEXT,
[webViewLink] TEXT,
[iconLink] TEXT,
[hasThumbnail] INTEGER,
[thumbnailVersion] TEXT,
[viewedByMe] INTEGER,
[createdTime] TEXT,
[modifiedTime] TEXT,
[modifiedByMe] INTEGER,
[shared] INTEGER,
[ownedByMe] INTEGER,
[viewersCanCopyContent] INTEGER,
[copyRequiresWriterPermission] INTEGER,
[writersCanShare] INTEGER,
[quotaBytesUsed] TEXT,
[isAppAuthorized] INTEGER,
[linkShareMetadata] TEXT,
FOREIGN KEY([_parent]) REFERENCES [drive_folders]([id]),
FOREIGN KEY([_owner]) REFERENCES [drive_users]([permissionId]),
FOREIGN KEY([lastModifyingUser]) REFERENCES [drive_users]([permissionId])
);
```
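As a sketch of the kind of analysis this schema makes possible (the query below is illustrative, not part of the tool), you could list your ten largest files by reported quota usage:
```sql
select
  name,
  mimeType,
  cast(quotaBytesUsed as integer) as bytes_used
from drive_files
order by bytes_used desc
limit 10;
```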
## Thumbnails
You can construct a thumbnail image for a known file ID using the following URL:
https://drive.google.com/thumbnail?sz=w800-h800&id=FILE_ID
Users who are signed into Google Drive and have permission to view a file will be redirected to a thumbnail version of that file. You can tweak the `w800` and `h800` parameters to request different thumbnail sizes.
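For example, tweaking those parameters to request a smaller 400-pixel thumbnail of the file ID used earlier would look like this:

    https://drive.google.com/thumbnail?sz=w400-h400&id=0B32uDVNZfiEKLUtIT1gzYWN2NDI4SzVQYTFWWWxCWUtvVGNB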
## Privacy policy
This tool requests access to your Google Drive account in order to retrieve metadata about your files there. It also offers a feature that can download the content of those files.
The credentials used to access your account are stored in the `auth.json` file on your computer. The metadata and content retrieved from Google Drive is also stored only on your own personal computer.
At no point do the developers of this tool gain access to any of your data.
## Development
To contribute to this tool, first check out the code. Then create a new virtual environment:
cd google-drive-to-sqlite
python -m venv venv
source venv/bin/activate
Or if you are using `pipenv`:
pipenv shell
Now install the dependencies and test dependencies:
pip install -e '.[test]'
To run the tests:
pytest
","
",1,public,0,,,
462903750,R_kgDOG5dZxg,datasette-redirect-forbidden,simonw/datasette-redirect-forbidden,0,9599,https://github.com/simonw/datasette-redirect-forbidden,Redirect forbidden requests to a login page,0,2022-02-23T20:59:26Z,2022-02-23T22:00:12Z,2022-02-23T22:02:38Z,,7,0,0,Python,1,1,1,1,0,0,0,0,1,apache-2.0,[],0,1,0,main,"{""admin"": false, ""maintain"": false, ""push"": false, ""triage"": false, ""pull"": false}",,,0,1,"# datasette-redirect-forbidden
[](https://pypi.org/project/datasette-redirect-forbidden/)
[](https://github.com/simonw/datasette-redirect-forbidden/releases)
[](https://github.com/simonw/datasette-redirect-forbidden/actions?query=workflow%3ATest)
[](https://github.com/simonw/datasette-redirect-forbidden/blob/main/LICENSE)
Redirect forbidden requests to a login page
## Installation
Install this plugin in the same environment as Datasette.
$ datasette install datasette-redirect-forbidden
## Usage
Add the following to your `metadata.yml` (or `metadata.json`) file to configure the plugin:
```yaml
plugins:
datasette-redirect-forbidden:
redirect_to: /-/login
```
Any 403 forbidden pages will redirect to the specified page.
## Development
To set up this plugin locally, first check out the code. Then create a new virtual environment:
cd datasette-redirect-forbidden
python3 -mvenv venv
source venv/bin/activate
Or if you are using `pipenv`:
pipenv shell
Now install the dependencies and test dependencies:
pip install -e '.[test]'
To run the tests:
pytest
","
datasette-redirect-forbidden
Redirect forbidden requests to a login page
Installation
Install this plugin in the same environment as Datasette.
$ datasette install datasette-redirect-forbidden
Usage
Add the following to your metadata.yml (or metadata.json) file to configure the plugin: