id,node_id,name,full_name,private,owner,html_url,description,fork,created_at,updated_at,pushed_at,homepage,size,stargazers_count,watchers_count,language,has_issues,has_projects,has_downloads,has_wiki,has_pages,forks_count,archived,disabled,open_issues_count,license,topics,forks,open_issues,watchers,default_branch,permissions,temp_clone_token,organization,network_count,subscribers_count,readme,readme_html,allow_forking,visibility,is_template,template_repository,web_commit_signoff_required,has_discussions 166159072,MDEwOlJlcG9zaXRvcnkxNjYxNTkwNzI=,db-to-sqlite,simonw/db-to-sqlite,0,9599,https://github.com/simonw/db-to-sqlite,CLI tool for exporting tables or queries from any SQL database to a SQLite file,0,2019-01-17T04:16:48Z,2021-06-11T22:52:12Z,2021-06-11T22:55:56Z,,77,226,226,Python,1,1,1,1,0,12,0,0,2,apache-2.0,"[""sqlalchemy"", ""sqlite"", ""datasette"", ""datasette-io"", ""datasette-tool""]",12,2,226,main,"{""admin"": false, ""push"": false, ""pull"": false}",,,12,4,"# db-to-sqlite [![PyPI](https://img.shields.io/pypi/v/db-to-sqlite.svg)](https://pypi.python.org/pypi/db-to-sqlite) [![Changelog](https://img.shields.io/github/v/release/simonw/db-to-sqlite?include_prereleases&label=changelog)](https://github.com/simonw/db-to-sqlite/releases) [![Tests](https://github.com/simonw/db-to-sqlite/workflows/Test/badge.svg)](https://github.com/simonw/db-to-sqlite/actions?query=workflow%3ATest) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/db-to-sqlite/blob/main/LICENSE) CLI tool for exporting tables or queries from any SQL database to a SQLite file. ## Installation Install from PyPI like so: pip install db-to-sqlite If you want to use it with MySQL, you can install the extra dependency like this: pip install 'db-to-sqlite[mysql]' Installing the `mysqlclient` library on OS X can be tricky - I've found [this recipe](https://gist.github.com/simonw/90ac0afd204cd0d6d9c3135c3888d116) to work (run that before installing `db-to-sqlite`). For PostgreSQL, use this: pip install 'db-to-sqlite[postgresql]' ## Usage Usage: db-to-sqlite [OPTIONS] CONNECTION PATH Load data from any database into SQLite. PATH is a path to the SQLite file to create, e.c. /tmp/my_database.db CONNECTION is a SQLAlchemy connection string, for example: postgresql://localhost/my_database postgresql://username:passwd@localhost/my_database mysql://root@localhost/my_database mysql://username:passwd@localhost/my_database More: https://docs.sqlalchemy.org/en/13/core/engines.html#database-urls Options: --version Show the version and exit. --all Detect and copy all tables --table TEXT Specific tables to copy --skip TEXT When using --all skip these tables --redact TEXT... (table, column) pairs to redact with *** --sql TEXT Optional SQL query to run --output TEXT Table in which to save --sql query results --pk TEXT Optional column to use as a primary key --index-fks / --no-index-fks Should foreign keys have indexes? Default on -p, --progress Show progress bar --postgres-schema TEXT PostgreSQL schema to use --help Show this message and exit. For example, to save the content of the `blog_entry` table from a PostgreSQL database to a local file called `blog.db` you could do this: db-to-sqlite ""postgresql://localhost/myblog"" blog.db \ --table=blog_entry You can specify `--table` more than once. You can also save the data from all of your tables, effectively creating a SQLite copy of your entire database. Any foreign key relationships will be detected and added to the SQLite database. 
For example: db-to-sqlite ""postgresql://localhost/myblog"" blog.db \ --all When running `--all` you can specify tables to skip using `--skip`: db-to-sqlite ""postgresql://localhost/myblog"" blog.db \ --all \ --skip=django_migrations If you want to save the results of a custom SQL query, do this: db-to-sqlite ""postgresql://localhost/myblog"" output.db \ --output=query_results \ --sql=""select id, title, created from blog_entry"" \ --pk=id The `--output` option specifies the table that should contain the results of the query. ## Using db-to-sqlite with PostgreSQL schemas If the tables you want to copy from your PostgreSQL database aren't in the default schema, you can specify an alternate one with the `--postgres-schema` option: db-to-sqlite ""postgresql://localhost/myblog"" blog.db \ --all \ --postgres-schema my_schema ## Using db-to-sqlite with Heroku Postgres If you run an application on [Heroku](https://www.heroku.com/) using their [Postgres database product](https://www.heroku.com/postgres), you can use the `heroku config` command to access a compatible connection string: $ heroku config --app myappname | grep HEROKU_POSTG HEROKU_POSTGRESQL_OLIVE_URL: postgres://username:password@ec2-xxx-xxx-xxx-x.compute-1.amazonaws.com:5432/dbname You can pass this to `db-to-sqlite` to create a local SQLite database with the data from your Heroku instance. You can even do this using a bash one-liner: $ db-to-sqlite $(heroku config --app myappname | grep HEROKU_POSTG | cut -d: -f 2-) \ /tmp/heroku.db --all -p 1/23: django_migrations ... 17/23: blog_blogmark [####################################] 100% ... ## Related projects * [Datasette](https://github.com/simonw/datasette): A tool for exploring and publishing data. Works great with SQLite files generated using `db-to-sqlite`. * [sqlite-utils](https://github.com/simonw/sqlite-utils): Python CLI utility and library for manipulating SQLite databases. * [csvs-to-sqlite](https://github.com/simonw/csvs-to-sqlite): Convert CSV files into a SQLite database. ## Development To set up this tool locally, first checkout the code. Then create a new virtual environment: cd db-to-sqlite python3 -mvenv venv source venv/bin/activate Or if you are using `pipenv`: pipenv shell Now install the dependencies and test dependencies: pip install -e '.[test]' To run the tests: pytest This will skip tests against MySQL or PostgreSQL if you do not have their additional dependencies installed. You can install those extra dependencies like so: pip install -e '.[test_mysql,test_postgresql]' You can alternative use `pip install psycopg2-binary` if you cannot install the `psycopg2` dependency used by the `test_postgresql` extra. See [Running a MySQL server using Homebrew](https://til.simonwillison.net/homebrew/mysql-homebrew) for tips on running the tests against MySQL on macOS, including how to install the `mysqlclient` dependency. The PostgreSQL and MySQL tests default to expecting to run against servers on localhost. You can use environment variables to point them at different test database servers: - `MYSQL_TEST_DB_CONNECTION` - defaults to `mysql://root@localhost/test_db_to_sqlite` - `POSTGRESQL_TEST_DB_CONNECTION` - defaults to `postgresql://localhost/test_db_to_sqlite` The database you indicate in the environment variable - `test_db_to_sqlite` by default - will be deleted and recreated on every test run. ","

db-to-sqlite

CLI tool for exporting tables or queries from any SQL database to a SQLite file.

Installation

Install from PyPI like so:

pip install db-to-sqlite

If you want to use it with MySQL, you can install the extra dependency like this:

pip install 'db-to-sqlite[mysql]'

Installing the mysqlclient library on OS X can be tricky - I've found this recipe to work (run that before installing db-to-sqlite).

For PostgreSQL, use this:

pip install 'db-to-sqlite[postgresql]'

Usage

Usage: db-to-sqlite [OPTIONS] CONNECTION PATH

  Load data from any database into SQLite.

  PATH is a path to the SQLite file to create, e.g. /tmp/my_database.db

  CONNECTION is a SQLAlchemy connection string, for example:

      postgresql://localhost/my_database
      postgresql://username:passwd@localhost/my_database

      mysql://root@localhost/my_database
      mysql://username:passwd@localhost/my_database

  More: https://docs.sqlalchemy.org/en/13/core/engines.html#database-urls

Options:
  --version                     Show the version and exit.
  --all                         Detect and copy all tables
  --table TEXT                  Specific tables to copy
  --skip TEXT                   When using --all skip these tables
  --redact TEXT...              (table, column) pairs to redact with ***
  --sql TEXT                    Optional SQL query to run
  --output TEXT                 Table in which to save --sql query results
  --pk TEXT                     Optional column to use as a primary key
  --index-fks / --no-index-fks  Should foreign keys have indexes? Default on
  -p, --progress                Show progress bar
  --postgres-schema TEXT        PostgreSQL schema to use
  --help                        Show this message and exit.

For example, to save the content of the blog_entry table from a PostgreSQL database to a local file called blog.db you could do this:

db-to-sqlite ""postgresql://localhost/myblog"" blog.db \
    --table=blog_entry

You can specify --table more than once.
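
For example, to copy two tables in a single run (blog_tag is an illustrative table name):

db-to-sqlite ""postgresql://localhost/myblog"" blog.db \
    --table=blog_entry \
    --table=blog_tag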

You can also save the data from all of your tables, effectively creating a SQLite copy of your entire database. Any foreign key relationships will be detected and added to the SQLite database. For example:

db-to-sqlite ""postgresql://localhost/myblog"" blog.db \
    --all

When running --all you can specify tables to skip using --skip:

db-to-sqlite ""postgresql://localhost/myblog"" blog.db \
    --all \
    --skip=django_migrations

If you want to save the results of a custom SQL query, do this:

db-to-sqlite ""postgresql://localhost/myblog"" output.db \
    --output=query_results \
    --sql=""select id, title, created from blog_entry"" \
    --pk=id

The --output option specifies the table that should contain the results of the query.

Using db-to-sqlite with PostgreSQL schemas

If the tables you want to copy from your PostgreSQL database aren't in the default schema, you can specify an alternate one with the --postgres-schema option:

db-to-sqlite ""postgresql://localhost/myblog"" blog.db \
    --all \
    --postgres-schema my_schema

Using db-to-sqlite with Heroku Postgres

If you run an application on Heroku using their Postgres database product, you can use the heroku config command to access a compatible connection string:

$ heroku config --app myappname | grep HEROKU_POSTG
HEROKU_POSTGRESQL_OLIVE_URL: postgres://username:password@ec2-xxx-xxx-xxx-x.compute-1.amazonaws.com:5432/dbname

You can pass this to db-to-sqlite to create a local SQLite database with the data from your Heroku instance.
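
For example, using the placeholder connection string shown above:

$ db-to-sqlite ""postgres://username:password@ec2-xxx-xxx-xxx-x.compute-1.amazonaws.com:5432/dbname"" \
    /tmp/heroku.db --all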

You can even do this using a bash one-liner:

$ db-to-sqlite $(heroku config --app myappname | grep HEROKU_POSTG | cut -d: -f 2-) \
    /tmp/heroku.db --all -p
1/23: django_migrations
...
17/23: blog_blogmark
[####################################]  100%
...

Related projects

  • Datasette: A tool for exploring and publishing data. Works great with SQLite files generated using db-to-sqlite.
  • sqlite-utils: Python CLI utility and library for manipulating SQLite databases.
  • csvs-to-sqlite: Convert CSV files into a SQLite database.

Development

To set up this tool locally, first check out the code. Then create a new virtual environment:

cd db-to-sqlite
python3 -m venv venv
source venv/bin/activate

Or if you are using pipenv:

pipenv shell

Now install the dependencies and test dependencies:

pip install -e '.[test]'

To run the tests:

pytest

This will skip tests against MySQL or PostgreSQL if you do not have their additional dependencies installed.

You can install those extra dependencies like so:

pip install -e '.[test_mysql,test_postgresql]'

You can alternatively use pip install psycopg2-binary if you cannot install the psycopg2 dependency used by the test_postgresql extra.

See Running a MySQL server using Homebrew for tips on running the tests against MySQL on macOS, including how to install the mysqlclient dependency.

The PostgreSQL and MySQL tests default to expecting to run against servers on localhost. You can use environment variables to point them at different test database servers:

  • MYSQL_TEST_DB_CONNECTION - defaults to mysql://root@localhost/test_db_to_sqlite
  • POSTGRESQL_TEST_DB_CONNECTION - defaults to postgresql://localhost/test_db_to_sqlite

The database you indicate in the environment variable - test_db_to_sqlite by default - will be deleted and recreated on every test run.
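
For example, to run the tests against a PostgreSQL server on another host (the hostname here is illustrative):

POSTGRESQL_TEST_DB_CONNECTION=""postgresql://db.example.com/test_db_to_sqlite"" \
    pytest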

",,,,,, 174715153,MDEwOlJlcG9zaXRvcnkxNzQ3MTUxNTM=,datasette-jellyfish,simonw/datasette-jellyfish,0,9599,https://github.com/simonw/datasette-jellyfish,Datasette plugin adding SQL functions for fuzzy text matching powered by Jellyfish,0,2019-03-09T16:02:01Z,2021-02-06T02:33:49Z,2021-02-06T02:34:18Z,https://datasette.io/plugins/datasette-jellyfish,15,9,9,Python,1,1,1,1,0,2,0,0,0,apache-2.0,"[""datasette"", ""datasette-plugin"", ""datasette-io""]",2,0,9,main,"{""admin"": false, ""push"": false, ""pull"": false}",,,2,1,"# datasette-jellyfish [![PyPI](https://img.shields.io/pypi/v/datasette-jellyfish.svg)](https://pypi.org/project/datasette-jellyfish/) [![Changelog](https://img.shields.io/github/v/release/simonw/datasette-jellyfish?include_prereleases&label=changelog)](https://github.com/simonw/datasette-jellyfish/releases) [![Tests](https://github.com/simonw/datasette-jellyfish/workflows/Test/badge.svg)](https://github.com/simonw/datasette-jellyfish/actions?query=workflow%3ATest) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/datasette-jellyfish/blob/main/LICENSE) Datasette plugin that adds custom SQL functions for fuzzy string matching, built on top of the [Jellyfish](https://github.com/jamesturk/jellyfish) Python library by James Turk and Michael Stephens. Interactive demos: * [soundex, metaphone, nysiis, match_rating_codex comparison](https://latest-with-plugins.datasette.io/fixtures?sql=SELECT%0D%0A++++soundex%28%3As%29%2C+%0D%0A++++metaphone%28%3As%29%2C+%0D%0A++++nysiis%28%3As%29%2C+%0D%0A++++match_rating_codex%28%3As%29&s=demo). * [distance functions comparison](https://latest-with-plugins.datasette.io/fixtures?sql=SELECT%0D%0A++++levenshtein_distance%28%3As1%2C+%3As2%29%2C%0D%0A++++damerau_levenshtein_distance%28%3As1%2C+%3As2%29%2C%0D%0A++++hamming_distance%28%3As1%2C+%3As2%29%2C%0D%0A++++jaro_similarity%28%3As1%2C+%3As2%29%2C%0D%0A++++jaro_winkler_similarity%28%3As1%2C+%3As2%29%2C%0D%0A++++match_rating_comparison%28%3As1%2C+%3As2%29%3B&s1=barrack+obama&s2=barrack+h+obama) Examples: SELECT soundex(""hello""); -- Outputs H400 SELECT metaphone(""hello""); -- Outputs HL SELECT nysiis(""hello""); -- Outputs HAL SELECT match_rating_codex(""hello""); -- Outputs HLL SELECT porter_stem(""running""); -- Outputs run SELECT levenshtein_distance(""hello"", ""hello world""); -- Outputs 6 SELECT damerau_levenshtein_distance(""hello"", ""hello world""); -- Outputs 6 SELECT hamming_distance(""hello"", ""hello world""); -- Outputs 6 SELECT jaro_similarity(""hello"", ""hello world""); -- Outputs 0.8181818181818182 SELECT jaro_winkler_similarity(""hello"", ""hello world""); -- Outputs 0.890909090909091 SELECT match_rating_comparison(""hello"", ""helloo""); -- Outputs 1 See [the Jellyfish documentation](https://jellyfish.readthedocs.io/en/latest/) for an explanation of each of these functions.","

datasette-jellyfish

Datasette plugin that adds custom SQL functions for fuzzy string matching, built on top of the Jellyfish Python library by James Turk and Michael Stephens.

Interactive demos:

  • soundex, metaphone, nysiis, match_rating_codex comparison
  • distance functions comparison

Examples:

SELECT soundex(""hello"");
    -- Outputs H400
SELECT metaphone(""hello"");
    -- Outputs HL
SELECT nysiis(""hello"");
    -- Outputs HAL
SELECT match_rating_codex(""hello"");
    -- Outputs HLL
SELECT porter_stem(""running"");
    -- Outputs run
SELECT levenshtein_distance(""hello"", ""hello world"");
    -- Outputs 6
SELECT damerau_levenshtein_distance(""hello"", ""hello world"");
    -- Outputs 6
SELECT hamming_distance(""hello"", ""hello world"");
    -- Outputs 6
SELECT jaro_similarity(""hello"", ""hello world"");
    -- Outputs 0.8181818181818182
SELECT jaro_winkler_similarity(""hello"", ""hello world"");
    -- Outputs 0.890909090909091
SELECT match_rating_comparison(""hello"", ""helloo"");
    -- Outputs 1

See the Jellyfish documentation for an explanation of each of these functions.

",,,,,, 184168864,MDEwOlJlcG9zaXRvcnkxODQxNjg4NjQ=,datasette-render-html,simonw/datasette-render-html,0,9599,https://github.com/simonw/datasette-render-html,Plugin for selectively rendering the HTML is specific columns,0,2019-04-30T01:21:25Z,2020-09-24T04:44:47Z,2021-03-17T03:57:13Z,,8,2,2,Python,1,1,1,1,0,2,0,0,1,,"[""datasette"", ""datasette-plugin"", ""datasette-io""]",2,1,2,master,"{""admin"": false, ""push"": false, ""pull"": false}",,,2,1,"# datasette-render-html [![PyPI](https://img.shields.io/pypi/v/datasette-render-html.svg)](https://pypi.org/project/datasette-render-html/) [![CircleCI](https://circleci.com/gh/simonw/datasette-render-html.svg?style=svg)](https://circleci.com/gh/simonw/datasette-render-html) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/datasette-render-html/blob/master/LICENSE) This Datasette plugin lets you configure Datasette to render specific columns as HTML in the table and row interfaces. This means you can store HTML in those columns and have it rendered as such on those pages. If you have a database called `docs.db` containing a `glossary` table and you want the `definition` column in that table to be rendered as HTML, you would use a `metadata.json` file that looks like this: { ""databases"": { ""docs"": { ""tables"": { ""glossary"": { ""plugins"": { ""datasette-render-html"": { ""columns"": [""definition""] } } } } } } } ## Security This plugin allows HTML to be rendered exactly as it is stored in the database. As such, you should be sure only to use this against columns with content that you trust - otherwise you could open yourself up to an [XSS attack](https://owasp.org/www-community/attacks/xss/). It's possible to configure this plugin to apply to columns with specific names across whole databases or the full Datasette instance, but doing so is not safe. It could open you up to XSS vulnerabilities where an attacker composes a SQL query that results in a column containing unsafe HTML. As such, you should only use this plugin against specific columns in specific tables, as shown in the example above. ","

datasette-render-html

This Datasette plugin lets you configure Datasette to render specific columns as HTML in the table and row interfaces.

This means you can store HTML in those columns and have it rendered as such on those pages.

If you have a database called docs.db containing a glossary table and you want the definition column in that table to be rendered as HTML, you would use a metadata.json file that looks like this:

{
    ""databases"": {
        ""docs"": {
            ""tables"": {
                ""glossary"": {
                    ""plugins"": {
                        ""datasette-render-html"": {
                            ""columns"": [""definition""]
                        }
                    }
                }
            }
        }
    }
}
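
To try this configuration locally, save it as metadata.json and start Datasette against your database (docs.db is the example database name used above):

datasette docs.db -m metadata.json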

Security

This plugin allows HTML to be rendered exactly as it is stored in the database. As such, you should be sure only to use this against columns with content that you trust - otherwise you could open yourself up to an XSS attack.

It's possible to configure this plugin to apply to columns with specific names across whole databases or the full Datasette instance, but doing so is not safe. It could open you up to XSS vulnerabilities where an attacker composes a SQL query that results in a column containing unsafe HTML.

As such, you should only use this plugin against specific columns in specific tables, as shown in the example above.

",,,,,, 189321671,MDEwOlJlcG9zaXRvcnkxODkzMjE2NzE=,datasette-jq,simonw/datasette-jq,0,9599,https://github.com/simonw/datasette-jq,Datasette plugin that adds a custom SQL function for executing jq expressions against JSON values,0,2019-05-30T01:06:31Z,2020-12-24T17:35:27Z,2020-04-09T05:43:43Z,,11,10,10,Python,1,1,1,1,0,0,0,0,0,apache-2.0,"[""jq"", ""datasette"", ""datasette-plugin"", ""datasette-io""]",0,0,10,master,"{""admin"": false, ""push"": false, ""pull"": false}",,,0,2,"# datasette-jq [![PyPI](https://img.shields.io/pypi/v/datasette-jq.svg)](https://pypi.org/project/datasette-jq/) [![CircleCI](https://circleci.com/gh/simonw/datasette-jq.svg?style=svg)](https://circleci.com/gh/simonw/datasette-jq) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/datasette-jq/blob/master/LICENSE) Datasette plugin that adds custom SQL functions for executing [jq](https://stedolan.github.io/jq/) expressions against JSON values. Install this plugin in the same environment as Datasette to enable the `jq()` SQL function. Usage: select jq( column_with_json, ""{top_3: .classifiers[:3], v: .version}"" ) See [the jq manual](https://stedolan.github.io/jq/manual/#Basicfilters) for full details of supported expression syntax. ## Interactive demo You can try this plugin out at [datasette-jq-demo.datasette.io](https://datasette-jq-demo.datasette.io/) Sample query: select package, ""https://pypi.org/project/"" || package || ""/"" as url, jq(info, ""{summary: .info.summary, author: .info.author, versions: .releases|keys|reverse}"") from packages [Try this query out](https://datasette-jq-demo.datasette.io/demo?sql=select+package%2C+%22https%3A%2F%2Fpypi.org%2Fproject%2F%22+%7C%7C+package+%7C%7C+%22%2F%22+as+url%2C%0D%0Ajq%28info%2C+%22%7Bsummary%3A+.info.summary%2C+author%3A+.info.author%2C+versions%3A+.releases%7Ckeys%7Creverse%7D%22%29%0D%0Afrom+packages) in the interactive demo. ","

datasette-jq

Datasette plugin that adds custom SQL functions for executing jq expressions against JSON values.

Install this plugin in the same environment as Datasette to enable the jq() SQL function.
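
For example, using pip:

pip install datasette-jq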

Usage:

select jq(
    column_with_json,
    ""{top_3: .classifiers[:3], v: .version}""
)

See the jq manual for full details of supported expression syntax.

Interactive demo

You can try this plugin out at datasette-jq-demo.datasette.io

Sample query:

select package, ""https://pypi.org/project/"" || package || ""/"" as url,
jq(info, ""{summary: .info.summary, author: .info.author, versions: .releases|keys|reverse}"")
from packages

Try this query out in the interactive demo.

",,,,,, 190950781,MDEwOlJlcG9zaXRvcnkxOTA5NTA3ODE=,datasette-bplist,simonw/datasette-bplist,0,9599,https://github.com/simonw/datasette-bplist,Datasette plugin for working with Apple's binary plist format,0,2019-06-09T01:15:01Z,2021-06-07T18:05:00Z,2019-06-09T01:17:19Z,,7,9,9,Python,1,1,1,1,0,0,0,0,1,apache-2.0,"[""bplist"", ""datasette"", ""datasette-plugin"", ""datasette-io""]",0,1,9,master,"{""admin"": false, ""push"": false, ""pull"": false}",,,0,0,"# datasette-bplist [![PyPI](https://img.shields.io/pypi/v/datasette-bplist.svg)](https://pypi.org/project/datasette-bplist/) [![CircleCI](https://circleci.com/gh/simonw/datasette-bplist.svg?style=svg)](https://circleci.com/gh/simonw/datasette-bplist) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/datasette-bplist/blob/master/LICENSE) Datasette plugin for working with Apple's [binary plist](https://en.wikipedia.org/wiki/Property_list) format. This plugin adds two features: a display hook and a SQL function. The display hook will detect any database values that are encoded using the binary plist format. It will decode them, convert them into JSON and display them pretty-printed in the Datasette UI. The SQL function `bplist_to_json(value)` can be used inside a SQL query to convert a binary plist value into a JSON string. This can then be used with SQLite's `json_extract()` function or with the [datasette-jq](https://github.com/simonw/datasette-jq) plugin to further analyze that data as part of a SQL query. Install this plugin in the same environment as Datasette to enable this new functionality: pip install datasette-bplist ## Trying it out If you use a Mac you already have plenty of SQLite databases that contain binary plist data. One example is the database that powers the Apple Photos app. This database tends to be locked, so you will need to create a copy of the database in order to run queries against it: cp ~/Pictures/Photos\ Library.photoslibrary/database/photos.db /tmp/photos.db The database also makes use of custom SQLite extensions which prevent it from opening in Datasette. You can work around this by exporting the data that you want to experiment with into a new SQLite file. I recommend trying this plugin against the `RKMaster_dataNote` table, which contains plist-encoded EXIF metadata about the photos you have taken. You can export that table into a fresh database like so: sqlite3 /tmp/photos.db "".dump RKMaster_dataNote"" | sqlite3 /tmp/exif.db Now run `datasette /tmp/exif.db` and you can start trying out the plugin. ## Using the bplist_to_json() SQL function Once you have the `exif.db` demo working, you can try the `bplist_to_json()` SQL function. 
Here's a query that shows the camera lenses you have used the most often to take photos: select json_extract( bplist_to_json(value), ""$.{Exif}.LensModel"" ) as lens, count(*) as n from RKMaster_dataNote group by lens order by n desc; If you have a large number of photos this query can take a long time to execute, so you may need to increase the SQL time limit enforced by Datasette like so: $ datasette /tmp/exif.db \ --config sql_time_limit_ms:10000 Here's another query, showing the time at which you took every photo in your library which is classified as as screenshot: select attachedToId, json_extract( bplist_to_json(value), ""$.{Exif}.DateTimeOriginal"" ) from RKMaster_dataNote where json_extract( bplist_to_json(value), ""$.{Exif}.UserComment"" ) = ""Screenshot"" And if you install the [datasette-cluster-map](https://github.com/simonw/datasette-cluster-map) plugin, this query will show you a map of your most recent 1000 photos: select *, json_extract( bplist_to_json(value), ""$.{GPS}.Latitude"" ) as latitude, -json_extract( bplist_to_json(value), ""$.{GPS}.Longitude"" ) as longitude, json_extract( bplist_to_json(value), ""$.{Exif}.DateTimeOriginal"" ) as datetime from RKMaster_dataNote where latitude is not null order by attachedToId desc ","

datasette-bplist

Datasette plugin for working with Apple's binary plist format.

This plugin adds two features: a display hook and a SQL function.

The display hook will detect any database values that are encoded using the binary plist format. It will decode them, convert them into JSON and display them pretty-printed in the Datasette UI.

The SQL function bplist_to_json(value) can be used inside a SQL query to convert a binary plist value into a JSON string. This can then be used with SQLite's json_extract() function or with the datasette-jq plugin to further analyze that data as part of a SQL query.

Install this plugin in the same environment as Datasette to enable this new functionality:

pip install datasette-bplist

Trying it out

If you use a Mac you already have plenty of SQLite databases that contain binary plist data.

One example is the database that powers the Apple Photos app.

This database tends to be locked, so you will need to create a copy of the database in order to run queries against it:

cp ~/Pictures/Photos\ Library.photoslibrary/database/photos.db /tmp/photos.db

The database also makes use of custom SQLite extensions which prevent it from opening in Datasette.

You can work around this by exporting the data that you want to experiment with into a new SQLite file.

I recommend trying this plugin against the RKMaster_dataNote table, which contains plist-encoded EXIF metadata about the photos you have taken.

You can export that table into a fresh database like so:

sqlite3 /tmp/photos.db "".dump RKMaster_dataNote"" | sqlite3 /tmp/exif.db

Now run datasette /tmp/exif.db and you can start trying out the plugin.

Using the bplist_to_json() SQL function

Once you have the exif.db demo working, you can try the bplist_to_json() SQL function.

Here's a query that shows the camera lenses you have used the most often to take photos:

select
    json_extract(
        bplist_to_json(value),
        ""$.{Exif}.LensModel""
    ) as lens,
    count(*) as n
from RKMaster_dataNote
group by lens
order by n desc;

If you have a large number of photos this query can take a long time to execute, so you may need to increase the SQL time limit enforced by Datasette like so:

$ datasette /tmp/exif.db \
    --config sql_time_limit_ms:10000

Here's another query, showing the time at which you took every photo in your library which is classified as a screenshot:

select
    attachedToId,
    json_extract(
        bplist_to_json(value),
        ""$.{Exif}.DateTimeOriginal""
    )
from RKMaster_dataNote
where
    json_extract(
        bplist_to_json(value),
        ""$.{Exif}.UserComment""
    ) = ""Screenshot""

And if you install the datasette-cluster-map plugin, this query will show you a map of your most recent 1000 photos:

select
    *, 
    json_extract(
        bplist_to_json(value),
        ""$.{GPS}.Latitude""
    ) as latitude,
    -json_extract(
        bplist_to_json(value),
        ""$.{GPS}.Longitude""
    ) as longitude,
    json_extract(
        bplist_to_json(value),
        ""$.{Exif}.DateTimeOriginal""
    ) as datetime
from
    RKMaster_dataNote
where
    latitude is not null
order by
    attachedToId desc
",,,,,, 191022928,MDEwOlJlcG9zaXRvcnkxOTEwMjI5Mjg=,datasette-render-binary,simonw/datasette-render-binary,0,9599,https://github.com/simonw/datasette-render-binary,Datasette plugin for rendering binary data,0,2019-06-09T15:25:52Z,2021-06-02T09:29:20Z,2019-06-13T16:14:31Z,,62,7,7,Python,1,1,1,1,0,0,0,0,1,apache-2.0,"[""datasette"", ""datasette-plugin"", ""datasette-io""]",0,1,7,master,"{""admin"": false, ""push"": false, ""pull"": false}",,,0,1,"# datasette-render-binary [![PyPI](https://img.shields.io/pypi/v/datasette-render-binary.svg)](https://pypi.org/project/datasette-render-binary/) [![CircleCI](https://circleci.com/gh/simonw/datasette-render-binary.svg?style=svg)](https://circleci.com/gh/simonw/datasette-render-binary) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/datasette-render-binary/blob/master/LICENSE) Datasette plugin for rendering binary data. Install this plugin in the same environment as Datasette to enable this new functionality: pip install datasette-render-binary Binary data in cells will now be rendered as a mixture of characters and octets. ![Screenshot](https://raw.githubusercontent.com/simonw/datasette-render-binary/master/example.png) ","

datasette-render-binary

Datasette plugin for rendering binary data.

Install this plugin in the same environment as Datasette to enable this new functionality:

pip install datasette-render-binary

Binary data in cells will now be rendered as a mixture of characters and octets.
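
To see it in action, you can create a small database containing a binary value and point Datasette at it (the database and table names here are illustrative; the blob is the PNG file signature):

sqlite3 binary-demo.db ""create table files (name text, content blob); insert into files values ('png-header', X'89504E470D0A1A0A');""
datasette binary-demo.db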

",,,,,, 195087137,MDEwOlJlcG9zaXRvcnkxOTUwODcxMzc=,datasette-auth-github,simonw/datasette-auth-github,0,9599,https://github.com/simonw/datasette-auth-github,Datasette plugin that authenticates users against GitHub,0,2019-07-03T16:02:53Z,2021-06-03T11:42:54Z,2021-02-25T06:40:17Z,https://datasette-auth-github-demo.datasette.io/,119,34,34,Python,1,1,1,1,0,4,0,0,3,apache-2.0,"[""asgi"", ""datasette"", ""datasette-plugin"", ""datasette-io""]",4,3,34,main,"{""admin"": false, ""push"": false, ""pull"": false}",,,4,1,"# datasette-auth-github [![PyPI](https://img.shields.io/pypi/v/datasette-auth-github.svg)](https://pypi.org/project/datasette-auth-github/) [![Changelog](https://img.shields.io/github/v/release/simonw/datasette-auth-github?include_prereleases&label=changelog)](https://github.com/simonw/datasette-auth-github/releases) [![Tests](https://github.com/simonw/datasette-auth-github/workflows/Test/badge.svg)](https://github.com/simonw/datasette-auth-github/actions?query=workflow%3ATest) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/datasette-auth-github/blob/main/LICENSE) Datasette plugin that authenticates users against GitHub. - [Setup instructions](#setup-instructions) - [The authenticated actor](#the-authenticated-actor) - [Restricting access to specific users](#restricting-access-to-specific-users) - [Restricting access to specific GitHub organizations or teams](#restricting-access-to-specific-github-organizations-or-teams) - [What to do if a user is removed from an organization or team](#what-to-do-if-a-user-is-removed-from-an-organization-or-team) ## Setup instructions * Install the plugin: `datasette install datasette-auth-github` * Create a GitHub OAuth app: https://github.com/settings/applications/new * Set the Authorization callback URL to `http://127.0.0.1:8001/-/github-auth-callback` * Create a `metadata.json` file with the following structure: ```json { ""title"": ""datasette-auth-github demo"", ""plugins"": { ""datasette-auth-github"": { ""client_id"": {""$env"": ""GITHUB_CLIENT_ID""}, ""client_secret"": {""$env"": ""GITHUB_CLIENT_SECRET""} } } } ``` Now you can start Datasette like this, passing in the secrets as environment variables: $ GITHUB_CLIENT_ID=XXX GITHUB_CLIENT_SECRET=YYY datasette \ fixtures.db -m metadata.json Note that hard-coding secrets in `metadata.json` is a bad idea as they will be visible to anyone who can navigate to `/-/metadata`. Instead, we use Datasette's mechanism for [adding secret plugin configuration options](https://docs.datasette.io/en/stable/plugins.html#secret-configuration-values). By default anonymous users will still be able to interact with Datasette. If you wish all users to have to sign in with a GitHub account first, add this to your ``metadata.json``: ```json { ""allow"": { ""id"": ""*"" }, ""plugins"": { ""datasette-auth-github"": { ""..."": ""..."" } } } ``` ## The authenticated actor Visit `/-/actor` when signed in to see the shape of the authenticated actor. It should look something like this: ```json { ""actor"": { ""display"": ""simonw"", ""gh_id"": ""9599"", ""gh_name"": ""Simon Willison"", ""gh_login"": ""simonw"", ""gh_email"": ""..."", ""gh_orgs"": [ ""dogsheep"", ""datasette-project"" ], ""gh_teams"": [ ""dogsheep/test"" ] } } ``` The `gh_orgs` and `gh_teams` properties will only be present if you used `load_teams` or `load_orgs`, documented below. 
## Restricting access to specific users You can use Datasette's [permissions mechanism](https://docs.datasette.io/en/stable/authentication.html) to specify which user or users are allowed to access your instance. Here's how to restrict access to just GitHub user `simonw`: ```json { ""allow"": { ""gh_login"": ""simonw"" }, ""plugins"": { ""datasette-auth-github"": { ""..."": ""..."" } } } ``` This `""allow""` block can be positioned at the database, table or query level instead: see [Configuring permissions in metadata.json](https://docs.datasette.io/en/stable/authentication.html#configuring-permissions-in-metadata-json) for details. Note that GitHub allows users to change their username, and it is possible for other people to claim old usernames. If you are concerned that your users may change their usernames you can key the allow blocks against GitHub user IDs instead, which do not change: ```json { ""allow"": { ""gh_id"": ""9599"" } } ``` ## Restricting access to specific GitHub organizations or teams You can also restrict access to users who are members of a specific GitHub organization. You'll need to configure the plugin to check if the user is a member of that organization when they first sign in. You can do that using the `""load_orgs""` plugin configuration option. Then you can use `""allow"": {""gh_orgs"": [...]}` to specify which organizations are allowed access. ```json { ""plugins"": { ""datasette-auth-github"": { ""..."": ""..."", ""load_orgs"": [""your-organization""] } }, ""allow"": { ""gh_orgs"": ""your-organization"" } } ``` If your organization is [arranged into teams](https://help.github.com/en/articles/organizing-members-into-teams) you can restrict access to a specific team like this: ```json { ""plugins"": { ""datasette-auth-github"": { ""..."": ""..."", ""load_teams"": [ ""your-organization/staff"", ""your-organization/engineering"", ] } }, ""allows"": { ""gh_team"": ""your-organization/engineering"" } } ``` ## What to do if a user is removed from an organization or team A user's organization and team memberships are checked once, when they first sign in. Those teams and organizations are then persisted in the user's signed `ds_actor` cookie. This means that if a user is removed from an organization or team but still has a Datasette cookie, they will still be able to access that Datasette instance. You can remedy this by rotating the `DATASETTE_SECRET` environment variable any time you make changes to your GitHub organization members. Changing this value will cause all of your existing users to be signed out, by invalidating their cookies. When they sign back in again their new memberships will be recorded in a new cookie. See [Configuring the secret](https://docs.datasette.io/en/stable/settings.html?highlight=secret#configuring-the-secret) in the Datasette documentation for more details. ","

datasette-auth-github

Datasette plugin that authenticates users against GitHub.

Setup instructions

  • Install the plugin: datasette install datasette-auth-github
  • Create a GitHub OAuth app: https://github.com/settings/applications/new
  • Set the Authorization callback URL to http://127.0.0.1:8001/-/github-auth-callback
  • Create a metadata.json file with the following structure:

{
    ""title"": ""datasette-auth-github demo"",
    ""plugins"": {
        ""datasette-auth-github"": {
            ""client_id"": {""$env"": ""GITHUB_CLIENT_ID""},
            ""client_secret"": {""$env"": ""GITHUB_CLIENT_SECRET""}
        }
    }
}

Now you can start Datasette like this, passing in the secrets as environment variables:

$ GITHUB_CLIENT_ID=XXX GITHUB_CLIENT_SECRET=YYY datasette \
    fixtures.db -m metadata.json

Note that hard-coding secrets in metadata.json is a bad idea as they will be visible to anyone who can navigate to /-/metadata. Instead, we use Datasette's mechanism for adding secret plugin configuration options.

By default anonymous users will still be able to interact with Datasette. If you wish all users to have to sign in with a GitHub account first, add this to your metadata.json:

{
    ""allow"": {
        ""id"": ""*""
    },
    ""plugins"": {
        ""datasette-auth-github"": {
            ""..."": ""...""
        }
    }
}

The authenticated actor

Visit /-/actor when signed in to see the shape of the authenticated actor. It should look something like this:

{
    ""actor"": {
        ""display"": ""simonw"",
        ""gh_id"": ""9599"",
        ""gh_name"": ""Simon Willison"",
        ""gh_login"": ""simonw"",
        ""gh_email"": ""..."",
        ""gh_orgs"": [
            ""dogsheep"",
            ""datasette-project""
        ],
        ""gh_teams"": [
            ""dogsheep/test""
        ]
    }
}

The gh_orgs and gh_teams properties will only be present if you used load_teams or load_orgs, documented below.

Restricting access to specific users

You can use Datasette's permissions mechanism to specify which user or users are allowed to access your instance. Here's how to restrict access to just GitHub user simonw:

{
    ""allow"": {
        ""gh_login"": ""simonw""
    },
    ""plugins"": {
        ""datasette-auth-github"": {
            ""..."": ""...""
        }
    }
}

This ""allow"" block can be positioned at the database, table or query level instead: see Configuring permissions in metadata.json for details.

Note that GitHub allows users to change their username, and it is possible for other people to claim old usernames. If you are concerned that your users may change their usernames you can key the allow blocks against GitHub user IDs instead, which do not change:

{
    ""allow"": {
        ""gh_id"": ""9599""
    }
}

Restricting access to specific GitHub organizations or teams

You can also restrict access to users who are members of a specific GitHub organization.

You'll need to configure the plugin to check if the user is a member of that organization when they first sign in. You can do that using the ""load_orgs"" plugin configuration option.

Then you can use ""allow"": {""gh_orgs"": [...]} to specify which organizations are allowed access.

{
    ""plugins"": {
        ""datasette-auth-github"": {
            ""..."": ""..."",
            ""load_orgs"": [""your-organization""]
        }
    },
    ""allow"": {
        ""gh_orgs"": ""your-organization""
    }
}

If your organization is arranged into teams you can restrict access to a specific team like this:

{
    ""plugins"": {
        ""datasette-auth-github"": {
            ""..."": ""..."",
            ""load_teams"": [
                ""your-organization/staff"",
                ""your-organization/engineering"",
            ]
        }
    },
    ""allows"": {
        ""gh_team"": ""your-organization/engineering""
    }
}

What to do if a user is removed from an organization or team

A user's organization and team memberships are checked once, when they first sign in. Those teams and organizations are then persisted in the user's signed ds_actor cookie.

This means that if a user is removed from an organization or team but still has a Datasette cookie, they will still be able to access that Datasette instance.

You can remedy this by rotating the DATASETTE_SECRET environment variable any time you make changes to your GitHub organization members.

Changing this value will cause all of your existing users to be signed out, by invalidating their cookies. When they sign back in again their new memberships will be recorded in a new cookie.
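
For example, you could generate a fresh secret and restart Datasette with it (the Python one-liner is just one way to produce a random value):

$ DATASETTE_SECRET=$(python3 -c ""import secrets; print(secrets.token_hex(32))"") \
    datasette fixtures.db -m metadata.json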

See Configuring the secret in the Datasette documentation for more details.

",,,,,, 195696804,MDEwOlJlcG9zaXRvcnkxOTU2OTY4MDQ=,datasette-cors,simonw/datasette-cors,0,9599,https://github.com/simonw/datasette-cors,Datasette plugin for configuring CORS headers,0,2019-07-07T21:03:11Z,2021-02-27T00:31:13Z,2019-07-11T04:40:57Z,,11,9,9,Python,1,1,1,1,0,0,0,0,1,apache-2.0,"[""datasette"", ""datasette-plugin"", ""datasette-io""]",0,1,9,master,"{""admin"": false, ""push"": false, ""pull"": false}",,,0,3,"# datasette-cors [![PyPI](https://img.shields.io/pypi/v/datasette-cors.svg)](https://pypi.org/project/datasette-cors/) [![CircleCI](https://circleci.com/gh/simonw/datasette-cors.svg?style=svg)](https://circleci.com/gh/simonw/datasette-cors) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/datasette-cors/blob/master/LICENSE) Datasette plugin for configuring CORS headers, based on https://github.com/simonw/asgi-cors You can use this plugin to allow JavaScript running on a whitelisted set of domains to make `fetch()` calls to the JSON API provided by your Datasette instance. ## Installation pip install datasette-cors ## Configuration You need to add some configuration to your Datasette `metadata.json` file for this plugin to take effect. To whitelist specific domains, use this: ```json { ""plugins"": { ""datasette-cors"": { ""hosts"": [""https://www.example.com""] } } } ``` You can also whitelist patterns like this: ```json { ""plugins"": { ""datasette-cors"": { ""host_wildcards"": [""https://*.example.com""] } } } ``` ## Testing it To test this plugin out, run it locally by saving one of the above examples as `metadata.json` and running this: $ datasette --memory -m metadata.json Now visit https://www.example.com/ in your browser, open the browser developer console and paste in the following: ```javascript fetch(""http://127.0.0.1:8001/:memory:.json?sql=select+sqlite_version%28%29"").then(r => r.json()).then(console.log) ``` If the plugin is running correctly, you will see the JSON response output to the console. ","

datasette-cors

Datasette plugin for configuring CORS headers, based on https://github.com/simonw/asgi-cors

You can use this plugin to allow JavaScript running on a whitelisted set of domains to make fetch() calls to the JSON API provided by your Datasette instance.

Installation

pip install datasette-cors

Configuration

You need to add some configuration to your Datasette metadata.json file for this plugin to take effect.

To whitelist specific domains, use this:

{
    ""plugins"": {
        ""datasette-cors"": {
            ""hosts"": [""https://www.example.com""]
        }
    }
}

You can also whitelist patterns like this:

{
    ""plugins"": {
        ""datasette-cors"": {
            ""host_wildcards"": [""https://*.example.com""]
        }
    }
}

Testing it

To test this plugin out, run it locally by saving one of the above examples as metadata.json and running this:

$ datasette --memory -m metadata.json

Now visit https://www.example.com/ in your browser, open the browser developer console and paste in the following:

fetch(""http://127.0.0.1:8001/:memory:.json?sql=select+sqlite_version%28%29"").then(r => r.json()).then(console.log)

If the plugin is running correctly, you will see the JSON response output to the console.

",,,,,, 207630174,MDEwOlJlcG9zaXRvcnkyMDc2MzAxNzQ=,datasette-rure,simonw/datasette-rure,0,9599,https://github.com/simonw/datasette-rure,Datasette plugin that adds a custom SQL function for executing matches using the Rust regular expression engine,0,2019-09-10T18:09:33Z,2020-12-04T04:26:53Z,2019-09-11T22:59:38Z,,19,4,4,Python,1,1,1,1,0,0,0,0,0,apache-2.0,"[""sqlite"", ""regular-expressions"", ""datasette"", ""datasette-plugin"", ""datasette-io""]",0,0,4,master,"{""admin"": false, ""push"": false, ""pull"": false}",,,0,1,"# datasette-rure [![PyPI](https://img.shields.io/pypi/v/datasette-rure.svg)](https://pypi.org/project/datasette-rure/) [![CircleCI](https://circleci.com/gh/simonw/datasette-rure.svg?style=svg)](https://circleci.com/gh/simonw/datasette-rure) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/datasette-rure/blob/master/LICENSE) Datasette plugin that adds a custom SQL function for executing matches using the Rust regular expression engine Install this plugin in the same environment as Datasette to enable the `regexp()` SQL function. $ pip install datasette-rure The plugin is built on top of the [rure-python](https://github.com/davidblewett/rure-python) library by David Blewett. ## regexp() to test regular expressions You can test if a value matches a regular expression like this: select regexp('hi.*there', 'hi there') -- returns 1 select regexp('not.*there', 'hi there') -- returns 0 You can also use SQLite's custom syntax to run matches: select 'hi there' REGEXP 'hi.*there' -- returns 1 This means you can select rows based on regular expression matches - for example, to select every article where the title begins with an E or an F: select * from articles where title REGEXP '^[EF]' Try this out: [REGEXP interactive demo](https://datasette-rure-demo.datasette.io/24ways?sql=select+*+from+articles+where+title+REGEXP+%27%5E%5BEF%5D%27) ## regexp_match() to extract groups You can extract captured subsets of a pattern using `regexp_match()`. select regexp_match('.*( and .*)', title) as n from articles where n is not null -- Returns the ' and X' component of any matching titles, e.g. -- and Recognition -- and Transitions Their Place -- etc This will return the first parenthesis match when called with two arguments. You can call it with three arguments to indicate which match you would like to extract: select regexp_match('.*(and)(.*)', title, 2) as n from articles where n is not null The function will return `null` for invalid inputs e.g. a pattern without capture groups. Try this out: [regexp_match() interactive demo](https://datasette-rure-demo.datasette.io/24ways?sql=select+%27WHY+%27+%7C%7C+regexp_match%28%27Why+%28.*%29%27%2C+title%29+as+t+from+articles+where+t+is+not+null) ## regexp_matches() to extract multiple matches at once The `regexp_matches()` function can be used to extract multiple patterns from a single string. The result is returned as a JSON array, which can then be further processed using SQLite's [JSON functions](https://www.sqlite.org/json1.html). The first argument is a regular expression with named capture groups. The second argument is the string to be matched. 
select regexp_matches( 'hello (?P\w+) the (?P\w+)', 'hello bob the dog, hello maggie the cat, hello tarquin the otter' ) This will return a list of JSON objects, each one representing the named captures from the original regular expression: [ {""name"": ""bob"", ""species"": ""dog""}, {""name"": ""maggie"", ""species"": ""cat""}, {""name"": ""tarquin"", ""species"": ""otter""} ] Try this out: [regexp_matches() interactive demo](https://datasette-rure-demo.datasette.io/24ways?sql=select+regexp_matches%28%0D%0A++++%27hello+%28%3FP%3Cname%3E%5Cw%2B%29+the+%28%3FP%3Cspecies%3E%5Cw%2B%29%27%2C%0D%0A++++%27hello+bob+the+dog%2C+hello+maggie+the+cat%2C+hello+tarquin+the+otter%27%0D%0A%29) ","

datasette-rure

Datasette plugin that adds a custom SQL function for executing matches using the Rust regular expression engine.

Install this plugin in the same environment as Datasette to enable the regexp() SQL function.

$ pip install datasette-rure

The plugin is built on top of the rure-python library by David Blewett.

regexp() to test regular expressions

You can test if a value matches a regular expression like this:

select regexp('hi.*there', 'hi there')
-- returns 1
select regexp('not.*there', 'hi there')
-- returns 0

You can also use SQLite's custom syntax to run matches:

select 'hi there' REGEXP 'hi.*there'
-- returns 1

This means you can select rows based on regular expression matches - for example, to select every article where the title begins with an E or an F:

select * from articles where title REGEXP '^[EF]'

Try this out: REGEXP interactive demo

regexp_match() to extract groups

You can extract captured subsets of a pattern using regexp_match().

select regexp_match('.*( and .*)', title) as n from articles where n is not null
-- Returns the ' and X' component of any matching titles, e.g.
--     and Recognition
--     and Transitions Their Place
-- etc

This will return the first parenthesis match when called with two arguments. You can call it with three arguments to indicate which match you would like to extract:

select regexp_match('.*(and)(.*)', title, 2) as n from articles where n is not null

The function will return null for invalid inputs, e.g. a pattern without capture groups.

Try this out: regexp_match() interactive demo

regexp_matches() to extract multiple matches at once

The regexp_matches() function can be used to extract multiple patterns from a single string. The result is returned as a JSON array, which can then be further processed using SQLite's JSON functions.

The first argument is a regular expression with named capture groups. The second argument is the string to be matched.

select regexp_matches(
    'hello (?P<name>\w+) the (?P<species>\w+)',
    'hello bob the dog, hello maggie the cat, hello tarquin the otter'
)

This will return a list of JSON objects, each one representing the named captures from the original regular expression:

[
    {""name"": ""bob"", ""species"": ""dog""},
    {""name"": ""maggie"", ""species"": ""cat""},
    {""name"": ""tarquin"", ""species"": ""otter""}
]

Try this out: regexp_matches() interactive demo

",,,,,, 209091256,MDEwOlJlcG9zaXRvcnkyMDkwOTEyNTY=,datasette-atom,simonw/datasette-atom,0,9599,https://github.com/simonw/datasette-atom,Datasette plugin that adds a .atom output format,0,2019-09-17T15:31:01Z,2021-03-26T02:06:51Z,2021-01-24T23:59:36Z,,47,10,10,Python,1,1,1,1,0,0,0,0,0,apache-2.0,"[""datasette"", ""datasette-plugin"", ""datasette-io""]",0,0,10,main,"{""admin"": false, ""push"": false, ""pull"": false}",,,0,2,"# datasette-atom [![PyPI](https://img.shields.io/pypi/v/datasette-atom.svg)](https://pypi.org/project/datasette-atom/) [![Changelog](https://img.shields.io/github/v/release/simonw/datasette-atom?include_prereleases&label=changelog)](https://github.com/simonw/datasette-atom/releases) [![Tests](https://github.com/simonw/datasette-atom/workflows/Test/badge.svg)](https://github.com/simonw/datasette-atom/actions?query=workflow%3ATest) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/datasette-atom/blob/main/LICENSE) Datasette plugin that adds support for generating [Atom feeds](https://validator.w3.org/feed/docs/atom.html) with the results of a SQL query. ## Installation Install this plugin in the same environment as Datasette to enable the `.atom` output extension. $ pip install datasette-atom ## Usage To create an Atom feed you need to define a custom SQL query that returns a required set of columns: * `atom_id` - a unique ID for each row. [This article](https://web.archive.org/web/20080211143232/http://diveintomark.org/archives/2004/05/28/howto-atom-id) has suggestions about ways to create these IDs. * `atom_title` - a title for that row. * `atom_updated` - an [RFC 3339](http://www.faqs.org/rfcs/rfc3339.html) timestamp representing the last time the entry was modified in a significant way. This can usually be the time that the row was created. The following columns are optional: * `atom_content` - content that should be shown in the feed. This will be treated as a regular string, so any embedded HTML tags will be escaped when they are displayed. * `atom_content_html` - content that should be shown in the feed. This will be treated as an HTML string, and will be sanitized using [Bleach](https://github.com/mozilla/bleach) to ensure it does not have any malicious code in it before being returned as part of a `` Atom element. If both are provided, this will be used in place of `atom_content`. * `atom_link` - a URL that should be used as the link that the feed entry points to. * `atom_author_name` - the name of the author of the entry. If you provide this you can also provide `atom_author_uri` and `atom_author_email` with a URL and e-mail address for that author. A query that returns these columns can then be returned as an Atom feed by adding the `.atom` extension. ## Example Here is an example SQL query which generates an Atom feed for new entries on [www.niche-museums.com](https://www.niche-museums.com/): ```sql select 'tag:niche-museums.com,' || substr(created, 0, 11) || ':' || id as atom_id, name as atom_title, created as atom_updated, 'https://www.niche-museums.com/browse/museums/' || id as atom_link, coalesce( '', '' ) || '

' || description || '

' as atom_content_html from museums order by created desc limit 15 ``` You can try this query by [pasting it in here](https://www.niche-museums.com/browse) - then click the `.atom` link to see it as an Atom feed. ## Using a canned query Datasette's [canned query mechanism](https://docs.datasette.io/en/stable/sql_queries.html#canned-queries) is a useful way to configure feeds. If a canned query definition has a `title` that will be used as the title of the Atom feed. Here's an example, defined using a `metadata.yaml` file: ```yaml databases: browse: queries: feed: title: Niche Museums sql: |- select 'tag:niche-museums.com,' || substr(created, 0, 11) || ':' || id as atom_id, name as atom_title, created as atom_updated, 'https://www.niche-museums.com/browse/museums/' || id as atom_link, coalesce( '', '' ) || '

' || description || '

' as atom_content_html from museums order by created desc limit 15 ``` ## Disabling HTML filtering The HTML allow-list used by Bleach for the `atom_content_html` column can be found in the `clean(html)` function at the bottom of [datasette_atom/__init__.py](https://github.com/simonw/datasette-atom/blob/main/datasette_atom/__init__.py). You can disable Bleach entirely for Atom feeds generated using a canned query. You should only do this if you are certain that no user-provided HTML could be included in that value. Here's how to do that in `metadata.json`: ```json { ""plugins"": { ""datasette-atom"": { ""allow_unsafe_html_in_canned_queries"": true } } } ``` Setting this to `true` will disable Bleach filtering for all canned queries across all databases. You can disable Bleach filtering just for a specific list of canned queries like so: ```json { ""plugins"": { ""datasette-atom"": { ""allow_unsafe_html_in_canned_queries"": { ""museums"": [""latest"", ""moderation""] } } } } ``` This will disable Bleach just for the canned queries called `latest` and `moderation` in the `museums.db` database. ","

datasette-atom

Datasette plugin that adds support for generating Atom feeds with the results of a SQL query.

Installation

Install this plugin in the same environment as Datasette to enable the .atom output extension.

$ pip install datasette-atom

Usage

To create an Atom feed you need to define a custom SQL query that returns a required set of columns:

  • atom_id - a unique ID for each row. This article has suggestions about ways to create these IDs.
  • atom_title - a title for that row.
  • atom_updated - an RFC 3339 timestamp representing the last time the entry was modified in a significant way. This can usually be the time that the row was created.

The following columns are optional:

  • atom_content - content that should be shown in the feed. This will be treated as a regular string, so any embedded HTML tags will be escaped when they are displayed.
  • atom_content_html - content that should be shown in the feed. This will be treated as an HTML string, and will be sanitized using Bleach to ensure it does not have any malicious code in it before being returned as part of a <content type=""html""> Atom element. If both are provided, this will be used in place of atom_content.
  • atom_link - a URL that should be used as the link that the feed entry points to.
  • atom_author_name - the name of the author of the entry. If you provide this you can also provide atom_author_uri and atom_author_email with a URL and e-mail address for that author.

A query that returns these columns can then be returned as an Atom feed by adding the .atom extension.
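
For example, here is a minimal sketch of such a query, assuming a hypothetical posts table whose created column already contains RFC 3339 timestamps:

select
  'tag:example.com,2020:' || id as atom_id,
  title as atom_title,
  created as atom_updated
from posts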

Example

Here is an example SQL query which generates an Atom feed for new entries on www.niche-museums.com:

select
  'tag:niche-museums.com,' || substr(created, 0, 11) || ':' || id as atom_id,
  name as atom_title,
  created as atom_updated,
  'https://www.niche-museums.com/browse/museums/' || id as atom_link,
  coalesce(
    '<img src=""' || photo_url || '?w=800&amp;h=400&amp;fit=crop&amp;auto=compress"">',
    ''
  ) || '<p>' || description || '</p>' as atom_content_html
from
  museums
order by
  created desc
limit
  15

You can try this query by pasting it in here - then click the .atom link to see it as an Atom feed.

Using a canned query

Datasette's canned query mechanism is a useful way to configure feeds. If a canned query definition has a title, that will be used as the title of the Atom feed.

Here's an example, defined using a metadata.yaml file:

databases:
  browse:
    queries:
      feed:
        title: Niche Museums
        sql: |-
          select
            'tag:niche-museums.com,' || substr(created, 0, 11) || ':' || id as atom_id,
            name as atom_title,
            created as atom_updated,
            'https://www.niche-museums.com/browse/museums/' || id as atom_link,
            coalesce(
              '<img src=""' || photo_url || '?w=800&amp;h=400&amp;fit=crop&amp;auto=compress"">',
              ''
            ) || '<p>' || description || '</p>' as atom_content_html
          from
            museums
          order by
            created desc
          limit
            15

Disabling HTML filtering

The HTML allow-list used by Bleach for the atom_content_html column can be found in the clean(html) function at the bottom of datasette_atom/__init__.py.

You can disable Bleach entirely for Atom feeds generated using a canned query. You should only do this if you are certain that no user-provided HTML could be included in that value.

Here's how to do that in metadata.json:

{
  ""plugins"": {
    ""datasette-atom"": {
      ""allow_unsafe_html_in_canned_queries"": true
    }
  }
}

Setting this to true will disable Bleach filtering for all canned queries across all databases.

You can disable Bleach filtering just for a specific list of canned queries like so:

{
  ""plugins"": {
    ""datasette-atom"": {
      ""allow_unsafe_html_in_canned_queries"": {
        ""museums"": [""latest"", ""moderation""]
      }
    }
  }
}

This will disable Bleach just for the canned queries called latest and moderation in the museums.db database.

",,,,,, 214299267,MDEwOlJlcG9zaXRvcnkyMTQyOTkyNjc=,datasette-render-timestamps,simonw/datasette-render-timestamps,0,9599,https://github.com/simonw/datasette-render-timestamps,Datasette plugin for rendering timestamps,0,2019-10-10T22:50:50Z,2020-10-17T11:09:42Z,2020-03-22T17:57:17Z,,17,4,4,Python,1,1,1,1,0,1,0,0,0,apache-2.0,"[""datasette"", ""datasette-plugin"", ""datasette-io""]",1,0,4,master,"{""admin"": false, ""push"": false, ""pull"": false}",,,1,2,"# datasette-render-timestamps [![PyPI](https://img.shields.io/pypi/v/datasette-render-timestamps.svg)](https://pypi.org/project/datasette-render-timestamps/) [![CircleCI](https://circleci.com/gh/simonw/datasette-render-timestamps.svg?style=svg)](https://circleci.com/gh/simonw/datasette-render-timestamps) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/datasette-render-timestamps/blob/master/LICENSE) Datasette plugin for rendering timestamps. ## Installation Install this plugin in the same environment as Datasette to enable this new functionality: pip install datasette-render-timestamps The plugin will then look out for integer numbers that are likely to be timestamps - anything that would be a number of seconds from 5 years ago to 5 years in the future. These will then be rendered in a more readable format. ## Configuration You can disable automatic column detection in favour of explicitly listing the columns that you would like to render using [plugin configuration](https://datasette.readthedocs.io/en/stable/plugins.html#plugin-configuration) in a `metadata.json` file. Add a `""datasette-render-timestamps""` configuration block and use a `""columns""` key to list the columns you would like to treat as timestamp values: ```json { ""plugins"": { ""datasette-render-timestamps"": { ""columns"": [""created"", ""updated""] } } } ``` This will cause any `created` or `updated` columns in any table to be treated as timestamps and rendered. Save this to `metadata.json` and run datasette with the `--metadata` flag to load this configuration: datasette serve mydata.db --metadata metadata.json To disable automatic timestamp detection entirely, you can use `""columnns"": []`. This configuration block can be used at the top level, or it can be applied just to specific databases or tables. Here's how to apply it to just the `entries` table in the `news.db` database: ```json { ""databases"": { ""news"": { ""tables"": { ""entries"": { ""plugins"": { ""datasette-render-timestamps"": { ""columns"": [""created"", ""updated""] } } } } } } } ``` And here's how to apply it to every `created` column in every table in the `news.db` database: ```json { ""databases"": { ""news"": { ""plugins"": { ""datasette-render-timestamps"": { ""columns"": [""created"", ""updated""] } } } } } ``` ### Customizing the date format The default format is `%B %d, %Y - %H:%M:%S UTC` which renders for example: `October 10, 2019 - 07:18:29 UTC`. If you want another format, the date format can be customized using plugin configuration. Any format string supported by [strftime](http://strftime.org/) may be used. For example: ```json { ""plugins"": { ""datasette-render-timestamps"": { ""format"": ""%Y-%m-%d-%H:%M:%S"" } } } ``` ","

datasette-render-timestamps

Datasette plugin for rendering timestamps.

Installation

Install this plugin in the same environment as Datasette to enable this new functionality:

pip install datasette-render-timestamps

The plugin will then look out for integer numbers that are likely to be timestamps - anything that would be a number of seconds from 5 years ago to 5 years in the future.

These will then be rendered in a more readable format.

Configuration

You can disable automatic column detection in favour of explicitly listing the columns that you would like to render using plugin configuration in a metadata.json file.

Add a ""datasette-render-timestamps"" configuration block and use a ""columns"" key to list the columns you would like to treat as timestamp values:

{
    ""plugins"": {
        ""datasette-render-timestamps"": {
            ""columns"": [""created"", ""updated""]
        }
    }
}

This will cause any created or updated columns in any table to be treated as timestamps and rendered.

Save this to metadata.json and run datasette with the --metadata flag to load this configuration:

datasette serve mydata.db --metadata metadata.json

To disable automatic timestamp detection entirely, you can use ""columns"": [].

This configuration block can be used at the top level, or it can be applied just to specific databases or tables. Here's how to apply it to just the entries table in the news.db database:

{
    ""databases"": {
        ""news"": {
            ""tables"": {
                ""entries"": {
                    ""plugins"": {
                        ""datasette-render-timestamps"": {
                            ""columns"": [""created"", ""updated""]
                        }
                    }
                }
            }
        }
    }
}

And here's how to apply it to every created column in every table in the news.db database:

{
    ""databases"": {
        ""news"": {
            ""plugins"": {
                ""datasette-render-timestamps"": {
                    ""columns"": [""created"", ""updated""]
                }
            }
        }
    }
}

Customizing the date format

The default format is %B %d, %Y - %H:%M:%S UTC which renders for example: October 10, 2019 - 07:18:29 UTC. If you want another format, the date format can be customized using plugin configuration. Any format string supported by strftime may be used. For example:

{
    ""plugins"": {
        ""datasette-render-timestamps"": {
            ""format"": ""%Y-%m-%d-%H:%M:%S""
        }
    }
}
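
If you want to preview what a given format string will produce, a quick way is to test it in plain Python first (a sketch, independent of this plugin; the timestamp value matches the example rendered above):

from datetime import datetime, timezone

# 1570691909 is the epoch timestamp for the example shown above
dt = datetime.fromtimestamp(1570691909, tz=timezone.utc)
print(dt.strftime('%Y-%m-%d-%H:%M:%S'))  # prints: 2019-10-10-07:18:29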
",,,,,, 217216787,MDEwOlJlcG9zaXRvcnkyMTcyMTY3ODc=,datasette-haversine,simonw/datasette-haversine,0,9599,https://github.com/simonw/datasette-haversine,Datasette plugin that adds a custom SQL function for haversine distances,0,2019-10-24T05:16:14Z,2021-07-28T20:13:38Z,2021-07-28T20:14:24Z,,8,1,1,Python,1,1,1,1,0,0,0,0,0,apache-2.0,"[""datasette"", ""datasette-plugin"", ""datasette-io""]",0,0,1,main,"{""admin"": false, ""push"": false, ""pull"": false}",,,0,1,"# datasette-haversine [![PyPI](https://img.shields.io/pypi/v/datasette-haversine.svg)](https://pypi.org/project/datasette-haversine/) [![Changelog](https://img.shields.io/github/v/release/simonw/datasette-haversine?include_prereleases&label=changelog)](https://github.com/simonw/datasette-haversine/releases) [![Tests](https://github.com/simonw/datasette-haversine/workflows/Test/badge.svg)](https://github.com/simonw/datasette-haversine/actions?query=workflow%3ATest) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/datasette-haversine/blob/main/LICENSE) Datasette plugin that adds a custom SQL function for haversine distances Install this plugin in the same environment as Datasette to enable the `haversine()` SQL function. $ pip install datasette-haversine The plugin is built on top of the [haversine](https://github.com/mapado/haversine) library. ## haversine() to calculate distances ```sql select haversine(lat1, lon1, lat2, lon2); ``` This will return the distance in kilometers between the point defined by `(lat1, lon1)` and the point defined by `(lat2, lon2)`. ## Custom units By default `haversine()` returns results in km. You can pass an optional third argument to get results in a different unit: - `ft` for feet - `m` for meters - `in` for inches - `mi` for miles - `nmi` for nautical miles - `km` for kilometers (the default) ```sql select haversine(lat1, lon1, lat2, lon2, 'mi'); ``` ","

datasette-haversine

Datasette plugin that adds a custom SQL function for haversine distances

Install this plugin in the same environment as Datasette to enable the haversine() SQL function.

$ pip install datasette-haversine

The plugin is built on top of the haversine library.

haversine() to calculate distances

select haversine(lat1, lon1, lat2, lon2);

This will return the distance in kilometers between the point defined by (lat1, lon1) and the point defined by (lat2, lon2).
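
For example, using approximate coordinates for London and Paris (illustrative values, not taken from this project):

select haversine(51.5074, -0.1278, 48.8566, 2.3522);

This returns roughly 344, the great-circle distance in kilometers between the two cities.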

Custom units

By default haversine() returns results in km. You can pass an optional third argument to get results in a different unit:

  • ft for feet
  • m for meters
  • in for inches
  • mi for miles
  • nmi for nautical miles
  • km for kilometers (the default)

select haversine(lat1, lon1, lat2, lon2, 'mi');
",,,,,, 221802296,MDEwOlJlcG9zaXRvcnkyMjE4MDIyOTY=,datasette-template-sql,simonw/datasette-template-sql,0,9599,https://github.com/simonw/datasette-template-sql,Datasette plugin for executing SQL queries from templates,0,2019-11-14T23:05:34Z,2021-05-18T17:58:47Z,2021-05-18T17:58:44Z,https://datasette.io/plugins/datasette-template-sql,23,6,6,Python,1,1,1,1,0,0,0,0,1,apache-2.0,"[""datasette"", ""datasette-plugin"", ""datasette-io""]",0,1,6,main,"{""admin"": false, ""push"": false, ""pull"": false}",,,0,1,"# datasette-template-sql [![PyPI](https://img.shields.io/pypi/v/datasette-template-sql.svg)](https://pypi.org/project/datasette-template-sql/) [![Changelog](https://img.shields.io/github/v/release/simonw/datasette-template-sql?include_prereleases&label=changelog)](https://github.com/simonw/datasette-template-sql/releases) [![Tests](https://github.com/simonw/datasette-template-sql/workflows/Test/badge.svg)](https://github.com/simonw/datasette-template-sql/actions?query=workflow%3ATest) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/datasette-template-sql/blob/main/LICENSE) Datasette plugin for executing SQL queries from templates. ## Examples [datasette.io](https://datasette.io/) uses this plugin extensively with [custom page templates](https://docs.datasette.io/en/stable/custom_templates.html#custom-pages), check out [simonw/datasette.io](https://github.com/simonw/datasette.io) to see how it works. [www.niche-museums.com](https://www.niche-museums.com/) uses this plugin to run a custom themed website on top of Datasette. The full source code for the site [is here](https://github.com/simonw/museums) - see also [niche-museums.com, powered by Datasette](https://simonwillison.net/2019/Nov/25/niche-museums/). [simonw/til](https://github.com/simonw/til) is another simple example, described in [Using a self-rewriting README powered by GitHub Actions to track TILs](https://simonwillison.net/2020/Apr/20/self-rewriting-readme/). ## Installation Run this command to install the plugin in the same environment as Datasette: $ pip install datasette-template-sql ## Usage This plugin makes a new function, `sql(sql_query)`, available to your Datasette templates. You can use it like this: ```html+jinja {% for row in sql(""select 1 + 1 as two, 2 * 4 as eight"") %} {% for key in row.keys() %} {{ key }}: {{ row[key] }}
{% endfor %} {% endfor %} ``` The plugin will execute SQL against the current database for the page in `database.html`, `table.html` and `row.html` templates. If a template does not have a current database (`index.html` for example) the query will execute against the first attached database. ### Queries with arguments You can construct a SQL query using `?` or `:name` parameter syntax by passing a list or dictionary as a second argument: ```html+jinja {% for row in sql(""select distinct topic from til order by topic"") %}
    <h2>{{ row.topic }}</h2>
    <ul>
        {% for til in sql(""select * from til where topic = ?"", [row.topic]) %}
            <li><a href=""{{ til.url }}"">{{ til.title }}</a> - {{ til.created[:10] }}</li>
        {% endfor %}
    </ul>
{% endfor %} ``` Here's the same example using the `:topic` style of parameters: ```html+jinja {% for row in sql(""select distinct topic from til order by topic"") %}
    <h2>{{ row.topic }}</h2>
    <ul>
        {% for til in sql(""select * from til where topic = :topic"", {""topic"": row.topic}) %}
            <li><a href=""{{ til.url }}"">{{ til.title }}</a> - {{ til.created[:10] }}</li>
        {% endfor %}
    </ul>
{% endfor %} ``` ### Querying a different database You can pass an optional `database=` argument to specify a named database to use for the query. For example, if you have attached a `news.db` database you could use this: ```html+jinja {% for article in sql( ""select headline, date, summary from articles order by date desc limit 5"", database=""news"" ) %}
    <h3>{{ article.headline }}</h3>
    <p class=""date"">{{ article.date }}</p>
    <p>{{ article.summary }}</p>
{% endfor %} ``` ","

datasette-template-sql

Datasette plugin for executing SQL queries from templates.

Examples

datasette.io uses this plugin extensively with custom page templates; check out simonw/datasette.io to see how it works.

www.niche-museums.com uses this plugin to run a custom themed website on top of Datasette. The full source code for the site is here - see also niche-museums.com, powered by Datasette.

simonw/til is another simple example, described in Using a self-rewriting README powered by GitHub Actions to track TILs.

Installation

Run this command to install the plugin in the same environment as Datasette:

$ pip install datasette-template-sql

Usage

This plugin makes a new function, sql(sql_query), available to your Datasette templates.

You can use it like this:

{% for row in sql(""select 1 + 1 as two, 2 * 4 as eight"") %}
    {% for key in row.keys() %}
        {{ key }}: {{ row[key] }}<br>
    {% endfor %}
{% endfor %}

The plugin will execute SQL against the current database for the page in database.html, table.html and row.html templates. If a template does not have a current database (index.html for example) the query will execute against the first attached database.

Queries with arguments

You can construct a SQL query using ? or :name parameter syntax by passing a list or dictionary as a second argument:

{% for row in sql(""select distinct topic from til order by topic"") %}
    <h2>{{ row.topic }}</h2>
    <ul>
        {% for til in sql(""select * from til where topic = ?"", [row.topic]) %}
            <li><a href=""{{ til.url }}"">{{ til.title }}</a> - {{ til.created[:10] }}</li>
        {% endfor %}
    </ul>
{% endfor %}

Here's the same example using the :topic style of parameters:

{% for row in sql(""select distinct topic from til order by topic"") %}
    <h2>{{ row.topic }}</h2>
    <ul>
        {% for til in sql(""select * from til where topic = :topic"", {""topic"": row.topic}) %}
            <li><a href=""{{ til.url }}"">{{ til.title }}</a> - {{ til.created[:10] }}</li>
        {% endfor %}
    </ul>
{% endfor %}

Querying a different database

You can pass an optional database= argument to specify a named database to use for the query. For example, if you have attached a news.db database you could use this:

{% for article in sql(
    ""select headline, date, summary from articles order by date desc limit 5"",
    database=""news""
) %}
    <h3>{{ article.headline }}</h3>
    <p class=""date"">{{ article.date }}</p>
    <p>{{ article.summary }}</p>
{% endfor %}
",,,,,, 228485806,MDEwOlJlcG9zaXRvcnkyMjg0ODU4MDY=,datasette-configure-asgi,simonw/datasette-configure-asgi,0,9599,https://github.com/simonw/datasette-configure-asgi,Datasette plugin for configuring arbitrary ASGI middleware,0,2019-12-16T22:17:10Z,2020-08-25T15:54:32Z,2019-12-16T22:19:49Z,,6,1,1,Python,1,1,1,1,0,0,0,0,0,apache-2.0,"[""asgi"", ""datasette"", ""datasette-plugin"", ""datasette-io""]",0,0,1,master,"{""admin"": false, ""push"": false, ""pull"": false}",,,0,1,"# datasette-configure-asgi [![PyPI](https://img.shields.io/pypi/v/datasette-configure-asgi.svg)](https://pypi.org/project/datasette-configure-asgi/) [![CircleCI](https://circleci.com/gh/simonw/datasette-configure-asgi.svg?style=svg)](https://circleci.com/gh/simonw/datasette-configure-asgi) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/datasette-configure-asgi/blob/master/LICENSE) Datasette plugin for configuring arbitrary ASGI middleware ## Installation pip install datasette-configure-asgi ## Usage This plugin only takes effect if your `metadata.json` file contains relevant top-level plugin configuration in a `""datasette-configure-asgi""` configuration key. For example, to wrap your Datasette instance in the `asgi-log-to-sqlite` middleware configured to write logs to `/tmp/log.db` you would use the following: ```json { ""plugins"": { ""datasette-configure-asgi"": [ { ""class"": ""asgi_log_to_sqlite.AsgiLogToSqlite"", ""args"": { ""file"": ""/tmp/log.db"" } } ] } } ``` The `""datasette-configure-asgi""` key should be a list of JSON objects. Each object should have a `""class""` key indicating the class to be used, and an optional `""args""` key providing any necessary arguments to be passed to that class constructor. ## Plugin structure This plugin can be used to wrap your Datasette instance in any ASGI middleware that conforms to the following structure: ```python class SomeAsgiMiddleware: def __init__(self, app, arg1, arg2): self.app = app self.arg1 = arg1 self.arg2 = arg2 async def __call__(self, scope, receive, send): start = time.time() await self.app(scope, receive, send) end = time.time() print(""Time taken: {}"".format(end - start)) ``` So the middleware is a class with a constructor which takes the wrapped application as a first argument, `app`, followed by further named arguments to configure the middleware. It provides an `async def __call__(self, scope, receive, send)` method to implement the middleware's behavior. ","

datasette-configure-asgi

Datasette plugin for configuring arbitrary ASGI middleware

Installation

pip install datasette-configure-asgi

Usage

This plugin only takes effect if your metadata.json file contains relevant top-level plugin configuration in a ""datasette-configure-asgi"" configuration key.

For example, to wrap your Datasette instance in the asgi-log-to-sqlite middleware configured to write logs to /tmp/log.db you would use the following:

{
    ""plugins"": {
        ""datasette-configure-asgi"": [
            {
                ""class"": ""asgi_log_to_sqlite.AsgiLogToSqlite"",
                ""args"": {
                    ""file"": ""/tmp/log.db""
                }
            }
        ]
    }
}

The ""datasette-configure-asgi"" key should be a list of JSON objects. Each object should have a ""class"" key indicating the class to be used, and an optional ""args"" key providing any necessary arguments to be passed to that class constructor.

Plugin structure

This plugin can be used to wrap your Datasette instance in any ASGI middleware that conforms to the following structure:

import time


class SomeAsgiMiddleware:
    def __init__(self, app, arg1, arg2):
        self.app = app
        self.arg1 = arg1
        self.arg2 = arg2

    async def __call__(self, scope, receive, send):
        start = time.time()
        await self.app(scope, receive, send)
        end = time.time()
        print(""Time taken: {}"".format(end - start))

So the middleware is a class with a constructor which takes the wrapped application as a first argument, app, followed by further named arguments to configure the middleware. It provides an async def __call__(self, scope, receive, send) method to implement the middleware's behavior.
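
For example, if the middleware above lived in an importable module called my_middleware (a hypothetical name, as are the argument values), it could be configured like this:

{
    ""plugins"": {
        ""datasette-configure-asgi"": [
            {
                ""class"": ""my_middleware.SomeAsgiMiddleware"",
                ""args"": {
                    ""arg1"": ""first value"",
                    ""arg2"": ""second value""
                }
            }
        ]
    }
}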

",,,,,, 240815938,MDEwOlJlcG9zaXRvcnkyNDA4MTU5Mzg=,shapefile-to-sqlite,simonw/shapefile-to-sqlite,0,9599,https://github.com/simonw/shapefile-to-sqlite,Load shapefiles into a SQLite (optionally SpatiaLite) database,0,2020-02-16T01:55:29Z,2021-03-26T08:39:43Z,2020-08-23T06:00:41Z,,54,15,15,Python,1,1,1,1,0,0,0,0,3,apache-2.0,"[""sqlite"", ""gis"", ""spatialite"", ""shapefiles"", ""datasette"", ""datasette-io"", ""datasette-tool""]",0,3,15,main,"{""admin"": false, ""push"": false, ""pull"": false}",,,0,1,"# shapefile-to-sqlite [![PyPI](https://img.shields.io/pypi/v/shapefile-to-sqlite.svg)](https://pypi.org/project/shapefile-to-sqlite/) [![CircleCI](https://circleci.com/gh/simonw/shapefile-to-sqlite.svg?style=svg)](https://circleci.com/gh/simonw/shapefile-to-sqlite) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/shapefile-to-sqlite/blob/main/LICENSE) Load shapefiles into a SQLite (optionally SpatiaLite) database. Project background: [Things I learned about shapefiles building shapefile-to-sqlite](https://simonwillison.net/2020/Feb/19/shapefile-to-sqlite/) ## How to install $ pip install shapefile-to-sqlite ## How to use You can run this tool against a shapefile file like so: $ shapefile-to-sqlite my.db features.shp This will load the geometries as GeoJSON in a text column. ## Using with SpatiaLite If you have [SpatiaLite](https://www.gaia-gis.it/fossil/libspatialite/index) available you can load them as SpatiaLite geometries like this: $ shapefile-to-sqlite my.db features.shp --spatialite The data will be loaded into a table called `features` - based on the name of the shapefile. You can specify an alternative table name using `--table`: $ shapefile-to-sqlite my.db features.shp --table=places --spatialite The tool will search for the SpatiaLite module in the following locations: - `/usr/lib/x86_64-linux-gnu/mod_spatialite.so` - `/usr/local/lib/mod_spatialite.dylib` If you have installed the module in another location, you can use the `--spatialite_mod=xxx` option to specify where: $ shapefile-to-sqlite my.db features.shp \ --spatialite_mod=/usr/lib/mod_spatialite.dylib You can use the `--spatial-index` option to create a spatial index on the `geometry` column: $ shapefile-to-sqlite my.db features.shp --spatial-index You can omit `--spatialite` if you use either `--spatialite-mod` or `--spatial-index`. ## Projections By default, this tool will attempt to convert geometries in the shapefile to the WGS 84 projection, for best conformance with the [GeoJSON specification](https://tools.ietf.org/html/rfc7946). If you want it to leave the data in whatever projection was used by the shapefile, use the `--crs=keep` option. You can convert the data to another output projection by passing it to the `--crs` option. For example, to convert to [EPSG:2227](https://epsg.io/2227) (California zone 3) use `--crs=espg:2227`. The full list of formats accepted by the `--crs` option is [documented here](https://pyproj4.github.io/pyproj/stable/api/crs.html#pyproj.crs.CRS.__init__). 
## Extracting columns If your data contains columns with a small number of heavily duplicated values - the names of specific agencies responsible for parcels of land for example - you can extract those columns into separate lookup tables referenced by foreign keys using the `-c` option: $ shapefile-to-sqlite my.db features.shp -c agency This will create a `agency` table with `id` and `name` columns, and will create the `agency` column in your main table as an integer foreign key reference to that table. The `-c` option can be used multiple times. [CPAD_2020a_Units](https://calands.datasettes.com/calands/CPAD_2020a_Units) is an example of a table created using the `-c` option. ","

shapefile-to-sqlite

Load shapefiles into a SQLite (optionally SpatiaLite) database.

Project background: Things I learned about shapefiles building shapefile-to-sqlite

How to install

$ pip install shapefile-to-sqlite

How to use

You can run this tool against a shapefile like so:

$ shapefile-to-sqlite my.db features.shp

This will load the geometries as GeoJSON in a text column.
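
Because the geometries are stored as GeoJSON text, you can inspect them with SQLite's built-in JSON functions - a sketch, assuming the table is named features after the shapefile as described below:

select json_extract(geometry, '$.type') as geometry_type, count(*)
from features
group by geometry_type;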

Using with SpatiaLite

If you have SpatiaLite available you can load the geometries as SpatiaLite geometries like this:

$ shapefile-to-sqlite my.db features.shp --spatialite

The data will be loaded into a table called features - based on the name of the shapefile. You can specify an alternative table name using --table:

$ shapefile-to-sqlite my.db features.shp --table=places --spatialite

The tool will search for the SpatiaLite module in the following locations:

  • /usr/lib/x86_64-linux-gnu/mod_spatialite.so
  • /usr/local/lib/mod_spatialite.dylib

If you have installed the module in another location, you can use the --spatialite_mod=xxx option to specify where:

$ shapefile-to-sqlite my.db features.shp \
    --spatialite_mod=/usr/lib/mod_spatialite.dylib

You can use the --spatial-index option to create a spatial index on the geometry column:

$ shapefile-to-sqlite my.db features.shp --spatial-index

You can omit --spatialite if you use either --spatialite-mod or --spatial-index.

Projections

By default, this tool will attempt to convert geometries in the shapefile to the WGS 84 projection, for best conformance with the GeoJSON specification.

If you want it to leave the data in whatever projection was used by the shapefile, use the --crs=keep option.

You can convert the data to another output projection by passing it to the --crs option. For example, to convert to EPSG:2227 (California zone 3) use --crs=epsg:2227.

The full list of formats accepted by the --crs option is documented here.

Extracting columns

If your data contains columns with a small number of heavily duplicated values - the names of specific agencies responsible for parcels of land for example - you can extract those columns into separate lookup tables referenced by foreign keys using the -c option:

$ shapefile-to-sqlite my.db features.shp -c agency

This will create an agency table with id and name columns, and will create the agency column in your main table as an integer foreign key reference to that table.
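
Here is a sketch of joining the main table back against that extracted lookup table, assuming the features table from the earlier examples and the default id/name columns described above:

select features.*, agency.name as agency_name
from features
join agency on features.agency = agency.id;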

The -c option can be used multiple times.

CPAD_2020a_Units is an example of a table created using the -c option.

",,,,,, 242260583,MDEwOlJlcG9zaXRvcnkyNDIyNjA1ODM=,datasette-mask-columns,simonw/datasette-mask-columns,0,9599,https://github.com/simonw/datasette-mask-columns,Datasette plugin that masks specified database columns,0,2020-02-22T01:29:16Z,2021-06-10T19:50:37Z,2021-06-10T19:51:02Z,https://datasette.io/plugins/datasette-mask-columns,15,2,2,Python,1,1,1,1,0,0,0,0,0,apache-2.0,"[""datasette"", ""datasette-plugin"", ""datasette-io""]",0,0,2,main,"{""admin"": false, ""push"": false, ""pull"": false}",,,0,1,"# datasette-mask-columns [![PyPI](https://img.shields.io/pypi/v/datasette-mask-columns.svg)](https://pypi.org/project/datasette-mask-columns/) [![Changelog](https://img.shields.io/github/v/release/simonw/datasette-mask-columns?include_prereleases&label=changelog)](https://github.com/simonw/datasette-mask-columns/releases) [![Tests](https://github.com/simonw/datasette-mask-columns/workflows/Test/badge.svg)](https://github.com/simonw/datasette-mask-columns/actions?query=workflow%3ATest) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/datasette-mask-columns/blob/main/LICENSE) Datasette plugin that masks specified database columns ## Installation pip install datasette-mask-columns This depends on plugin hook changes in a not-yet released branch of Datasette. See [issue #678](https://github.com/simonw/datasette/issues/678) for details. ## Usage In your `metadata.json` file add a section like this describing the database and table in which you wish to mask columns: ```json { ""databases"": { ""my-database"": { ""plugins"": { ""datasette-mask-columns"": { ""users"": [""password""] } } } } } ``` All SQL queries against the `users` table in `my-database.db` will now return `null` for the `password` column, no matter what value that column actually holds. The table page for `users` will display the text `REDACTED` in the masked column. This visual hint will only be available on the table page; it will not display his text for arbitrary queries against the table. ","

datasette-mask-columns

Datasette plugin that masks specified database columns

Installation

pip install datasette-mask-columns

This depends on plugin hook changes in a not-yet-released branch of Datasette. See issue #678 for details.

Usage

In your metadata.json file add a section like this describing the database and table in which you wish to mask columns:

{
    ""databases"": {
        ""my-database"": {
            ""plugins"": {
                ""datasette-mask-columns"": {
                    ""users"": [""password""]
                }
            }
        }
    }
}

All SQL queries against the users table in my-database.db will now return null for the password column, no matter what value that column actually holds.
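
For example, with the configuration above, even this query returns null in the password column for every row:

select * from users;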

The table page for users will display the text REDACTED in the masked column. This visual hint will only be available on the table page; it will not display this text for arbitrary queries against the table.

",,,,,, 243887036,MDEwOlJlcG9zaXRvcnkyNDM4ODcwMzY=,datasette-configure-fts,simonw/datasette-configure-fts,0,9599,https://github.com/simonw/datasette-configure-fts,Datasette plugin for enabling full-text search against selected table columns,0,2020-02-29T01:50:57Z,2020-11-01T02:59:12Z,2020-11-01T02:59:10Z,,42,2,2,Python,1,1,1,1,0,0,0,0,2,apache-2.0,"[""datasette"", ""datasette-plugin"", ""datasette-io""]",0,2,2,main,"{""admin"": false, ""push"": false, ""pull"": false}",,,0,1,"# datasette-configure-fts [![PyPI](https://img.shields.io/pypi/v/datasette-configure-fts.svg)](https://pypi.org/project/datasette-configure-fts/) [![Changelog](https://img.shields.io/github/v/release/simonw/datasette-configure-fts?include_prereleases&label=changelog)](https://github.com/simonw/datasette-configure-fts/releases) [![Tests](https://github.com/simonw/datasette-configure-fts/workflows/Test/badge.svg)](https://github.com/simonw/datasette-configure-fts/actions?query=workflow%3ATest) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/datasette-configure-fts/blob/main/LICENSE) Datasette plugin for enabling full-text search against selected table columns ## Installation Install this plugin in the same environment as Datasette. $ datasette install datasette-configure-fts ## Usage Having installed the plugin, visit `/-/configure-fts` on your Datasette instance to configure FTS for tables on attached writable databases. Any time you have permission to configure FTS for a table a menu item will appear in the table actions menu on the table page. By default only [the root actor](https://datasette.readthedocs.io/en/stable/authentication.html#using-the-root-actor) can access the page - so you'll need to run Datasette with the `--root` option and click on the link shown in the terminal to sign in and access the page. The `configure-fts` permission governs access. You can use permission plugins such as [datasette-permissions-sql](https://github.com/simonw/datasette-permissions-sql) to grant additional access to the write interface. ","

datasette-configure-fts

Datasette plugin for enabling full-text search against selected table columns

Installation

Install this plugin in the same environment as Datasette.

$ datasette install datasette-configure-fts

Usage

Having installed the plugin, visit /-/configure-fts on your Datasette instance to configure FTS for tables on attached writable databases.

Any time you have permission to configure FTS for a table, a menu item will appear in the table actions menu on the table page.

By default only the root actor can access the page - so you'll need to run Datasette with the --root option and click on the link shown in the terminal to sign in and access the page.
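
For example (mydata.db here is a placeholder for your own database file):

$ datasette mydata.db --root
http://127.0.0.1:8001/-/auth-token?token=...

Visiting the link printed to the terminal signs you in as the root actor, after which /-/configure-fts is accessible.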

The configure-fts permission governs access. You can use permission plugins such as datasette-permissions-sql to grant additional access to the write interface.

",,,,,, 245670670,MDEwOlJlcG9zaXRvcnkyNDU2NzA2NzA=,fec-to-sqlite,simonw/fec-to-sqlite,0,9599,https://github.com/simonw/fec-to-sqlite,Save FEC campaign finance data to a SQLite database,0,2020-03-07T16:52:49Z,2020-12-19T05:09:05Z,2020-03-07T18:21:48Z,,16,8,8,Python,1,1,1,1,0,0,0,0,1,apache-2.0,"[""sqlite"", ""fec"", ""datasette"", ""datasette-io"", ""datasette-tool""]",0,1,8,master,"{""admin"": false, ""push"": false, ""pull"": false}",,,0,2,"# fec-to-sqlite [![PyPI](https://img.shields.io/pypi/v/fec-to-sqlite.svg)](https://pypi.org/project/fec-to-sqlite/) [![CircleCI](https://circleci.com/gh/simonw/fec-to-sqlite.svg?style=svg)](https://circleci.com/gh/simonw/fec-to-sqlite) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/fec-to-sqlite/blob/master/LICENSE) Create a SQLite database using FEC campaign contributions data. This tool builds on [fecfile](https://github.com/esonderegger/) by Evan Sonderegger. ## How to install $ pip install fec-to-sqlite ## Usage $ fec-to-sqlite filings filings.db 1146148 This fetches the filing with ID `1146148` and stores it in tables in a SQLite database called `filings.db`. It will create any tables it needs. You can pass more than one filing ID, separated by spaces. ","

fec-to-sqlite

Create a SQLite database using FEC campaign contributions data.

This tool builds on fecfile by Evan Sonderegger.

How to install

$ pip install fec-to-sqlite

Usage

$ fec-to-sqlite filings filings.db 1146148

This fetches the filing with ID 1146148 and stores it in tables in a SQLite database called filings.db. It will create any tables it needs.

You can pass more than one filing ID, separated by spaces.
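
For example, to fetch two filings in one run (the second filing ID here is purely illustrative):

$ fec-to-sqlite filings filings.db 1146148 1146149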

",,,,,, 246108561,MDEwOlJlcG9zaXRvcnkyNDYxMDg1NjE=,datasette-column-inspect,simonw/datasette-column-inspect,0,9599,https://github.com/simonw/datasette-column-inspect,Experimental plugin that adds a column inspector,0,2020-03-09T18:11:00Z,2020-12-09T21:46:10Z,2020-12-09T21:47:38Z,,15,1,1,HTML,1,1,1,1,0,0,0,0,3,apache-2.0,"[""datasette"", ""datasette-plugin"", ""datasette-io""]",0,3,1,main,"{""admin"": false, ""push"": false, ""pull"": false}",,,0,1,"# datasette-column-inspect [![PyPI](https://img.shields.io/pypi/v/datasette-column-inspect.svg)](https://pypi.org/project/datasette-column-inspect/) [![Changelog](https://img.shields.io/github/v/release/simonw/datasette-column-inspect?include_prereleases&label=changelog)](https://github.com/simonw/datasette-column-inspect/releases) [![Tests](https://github.com/simonw/datasette-column-inspect/workflows/Test/badge.svg)](https://github.com/simonw/datasette-column-inspect/actions?query=workflow%3ATest) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/datasette-column-inspect/blob/main/LICENSE) Highly experimental Datasette plugin for inspecting columns. ## Installation Install this plugin in the same environment as Datasette. $ pip install datasette-column-inspect ## Usage This plugin adds an icon to each column on the table page which opens an inspection side panel. ","

datasette-column-inspect

Highly experimental Datasette plugin for inspecting columns.

Installation

Install this plugin in the same environment as Datasette.

$ pip install datasette-column-inspect

Usage

This plugin adds an icon to each column on the table page which opens an inspection side panel.

",,,,,, 248999994,MDEwOlJlcG9zaXRvcnkyNDg5OTk5OTQ=,datasette-show-errors,simonw/datasette-show-errors,0,9599,https://github.com/simonw/datasette-show-errors,Datasette plugin for displaying error tracebacks,0,2020-03-21T15:06:04Z,2020-09-24T00:17:29Z,2020-09-01T00:32:23Z,,7,1,1,Python,1,1,1,1,0,0,0,0,1,apache-2.0,"[""asgi"", ""datasette"", ""starlette"", ""datasette-plugin"", ""datasette-io""]",0,1,1,master,"{""admin"": false, ""push"": false, ""pull"": false}",,,0,0,"# datasette-show-errors [![PyPI](https://img.shields.io/pypi/v/datasette-show-errors.svg)](https://pypi.org/project/datasette-show-errors/) [![CircleCI](https://circleci.com/gh/simonw/datasette-show-errors.svg?style=svg)](https://circleci.com/gh/simonw/datasette-show-errors) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/datasette-show-errors/blob/master/LICENSE) Datasette plugin for displaying error tracebacks. **This plugin does not work with current versions of Datasette.** See [issue #2](https://github.com/simonw/datasette-show-errors/issues/2). ## Installation pip install datasette-show-errors ## Usage Installing the plugin will cause any internal error to be displayed with a full traceback, rather than just a generic 500 page. Be careful not to use this in a context that might expose sensitive information. ","

datasette-show-errors

Datasette plugin for displaying error tracebacks.

This plugin does not work with current versions of Datasette. See issue #2.

Installation

pip install datasette-show-errors

Usage

Installing the plugin will cause any internal error to be displayed with a full traceback, rather than just a generic 500 page.

Be careful not to use this in a context that might expose sensitive information.

",,,,,, 255460347,MDEwOlJlcG9zaXRvcnkyNTU0NjAzNDc=,datasette-clone,simonw/datasette-clone,0,9599,https://github.com/simonw/datasette-clone,Create a local copy of database files from a Datasette instance,0,2020-04-13T23:05:41Z,2021-06-08T15:33:21Z,2021-02-22T19:32:36Z,,20,2,2,Python,1,1,1,1,0,0,0,0,0,apache-2.0,"[""datasette"", ""datasette-io"", ""datasette-tool""]",0,0,2,main,"{""admin"": false, ""push"": false, ""pull"": false}",,,0,1,"# datasette-clone [![PyPI](https://img.shields.io/pypi/v/datasette-clone.svg)](https://pypi.org/project/datasette-clone/) [![Changelog](https://img.shields.io/github/v/release/simonw/datasette-clone?include_prereleases&label=changelog)](https://github.com/simonw/datasette-clone/releases) [![Tests](https://github.com/simonw/datasette-clone/workflows/Test/badge.svg)](https://github.com/simonw/datasette-clone/actions?query=workflow%3ATest) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/datasette-clone/blob/main/LICENSE) Create a local copy of database files from a Datasette instance. See [datasette-clone](https://simonwillison.net/2020/Apr/14/datasette-clone/) on my blog for background on this project. ## How to install $ pip install datasette-clone ## Usage This only works against Datasette instances running immutable databases (with the `-i` option). Databases published using the `datasette publish` command should be compatible with this tool. To download copies of all `.db` files from an instance, run: datasette-clone https://latest.datasette.io You can provide an optional second argument to specify a directory: datasette-clone https://latest.datasette.io /tmp/here-please The command stores its own copy of a `databases.json` manifest and uses it to only download databases that have changed the next time you run the command. It also stores a copy of the instance's `metadata.json` to ensure you have a copy of any source and licensing information for the downloaded databases. If your instance is protected by an API token, you can use `--token` to provide it: datasette-clone https://latest.datasette.io --token=xyz For verbose output showing what the tool is doing, use `-v`. ","

datasette-clone

Create a local copy of database files from a Datasette instance.

See datasette-clone on my blog for background on this project.

How to install

$ pip install datasette-clone

Usage

This only works against Datasette instances running immutable databases (with the -i option). Databases published using the datasette publish command should be compatible with this tool.

To download copies of all .db files from an instance, run:

datasette-clone https://latest.datasette.io

You can provide an optional second argument to specify a directory:

datasette-clone https://latest.datasette.io /tmp/here-please

The command stores its own copy of a databases.json manifest and uses it on subsequent runs to download only those databases that have changed.

It also stores a copy of the instance's metadata.json to ensure you have a copy of any source and licensing information for the downloaded databases.

If your instance is protected by an API token, you can use --token to provide it:

datasette-clone https://latest.datasette.io --token=xyz

For verbose output showing what the tool is doing, use -v.
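
These options can be combined; for example, to clone into a specific directory using a token, with verbose output:

datasette-clone https://latest.datasette.io /tmp/here-please --token=xyz -v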

",,,,,, 261634807,MDEwOlJlcG9zaXRvcnkyNjE2MzQ4MDc=,datasette-media,simonw/datasette-media,0,9599,https://github.com/simonw/datasette-media,Datasette plugin for serving media based on a SQL query,0,2020-05-06T02:42:57Z,2021-05-03T05:04:39Z,2020-07-30T23:39:29Z,,70,11,11,Python,1,1,1,1,0,0,0,0,8,apache-2.0,"[""datasette"", ""datasette-plugin"", ""datasette-io""]",0,8,11,main,"{""admin"": false, ""push"": false, ""pull"": false}",,,0,1,"# datasette-media [![PyPI](https://img.shields.io/pypi/v/datasette-media.svg)](https://pypi.org/project/datasette-media/) [![Changelog](https://img.shields.io/github/v/release/simonw/datasette-media?include_prereleases&label=changelog)](https://github.com/simonw/datasette-media/releases) [![CircleCI](https://circleci.com/gh/simonw/datasette-media.svg?style=svg)](https://circleci.com/gh/simonw/datasette-media) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/datasette-media/blob/master/LICENSE) Datasette plugin for serving media based on a SQL query. Use this when you have a database table containing references to files on disk - or binary content stored in BLOB columns - that you would like to be able to serve to your users. ## Installation Install this plugin in the same environment as Datasette. $ pip install datasette-media ### HEIC image support Modern iPhones save their photos using the [HEIC image format](https://en.wikipedia.org/wiki/High_Efficiency_Image_File_Format). Processing these images requires an additional dependency, [pyheif](https://pypi.org/project/pyheif/). You can include this dependency by running: $ pip install datasette-media[heif] ## Usage You can use this plugin to configure Datasette to serve static media based on SQL queries to an underlying database table. Media will be served from URLs that start with `/-/media/`. The full URL to each media asset will look like this: /-/media/type-of-media/media-key `type-of-media` will correspond to a configured SQL query, and might be something like `photo`. `media-key` will be an identifier that is used as part of the underlying SQL query to find which file should be served. ### Serving static files from disk The following ``metadata.json`` configuration will cause this plugin to serve files from disk, based on queries to a database table called `apple_photos`. ```json { ""plugins"": { ""datasette-media"": { ""photo"": { ""sql"": ""select filepath from apple_photos where uuid=:key"" } } } } ``` A request to `/-/media/photo/CF972D33-5324-44F2-8DAE-22CB3182CD31` will execute the following SQL query: ```sql select filepath from apple_photos where uuid=:key ``` The value from the URL - in this case `CF972D33-5324-44F2-8DAE-22CB3182CD31` - will be passed as the `:key` parameter to the query. The query returns a `filepath` value that has been read from the table. The plugin will then read that file from disk and serve it in response to the request. SQL queries default to running against the first connected database. You can specify a different database to execute the query against using `""database"": ""name_of_db""`. To execute against `photos.db`, use this: ```json { ""plugins"": { ""datasette-media"": { ""photo"": { ""sql"": ""select filepath from apple_photos where uuid=:key"", ""database"": ""photos"" } } } } ``` See [dogsheep-photos](https://github.com/dogsheep/dogsheep-photos) for an example of an application that can benefit from this plugin. 
### Serving binary content from BLOB columns If your SQL query returns a `content` column, this will be served directly to the user: ```json { ""plugins"": { ""datasette-media"": { ""photo"": { ""sql"": ""select thumbnail as content from photos where uuid=:key"", ""database"": ""thumbs"" } } } } ``` You can also return a `content_type` column which will be used as the `Content-Type` header served to the user: ```json { ""plugins"": { ""datasette-media"": { ""photo"": { ""sql"": ""select body as content, 'text/html;charset=utf-8' as content_type from documents where id=:key"", ""database"": ""documents"" } } } } ``` If you do not specify a `content_type` the default of `application/octet-stream` will be used. ### Serving content proxied from a URL To serve content that is itself fetched from elsewhere, return a `content_url` column. This can be particularly useful when combined with the ability to resize images (described in the next section). ```json { ""plugins"": { ""datasette-media"": { ""photos"": { ""sql"": ""select photo_url as content_url from photos where id=:key"", ""database"": ""photos"", ""enable_transform"": true } } } } ``` Now you can access resized versions of images from that URL like so: /-/media/photos/13?w=200 ### Setting a download file name The `content_filename` column can be returned to force browsers to download the content using a specific file name. ```json { ""plugins"": { ""datasette-media"": { ""hello"": { ""sql"": ""select 'Hello ' || :key as content, 'hello.txt' as content_filename"" } } } } ``` Visiting `/-/media/hello/Groot` will cause your browser to download a file called `hello.txt` containing the text `Hello Groot`. ### Resizing or transforming images Your SQL query can specify that an image should be resized and/or converted to another format by returning additional columns. All three are optional. * `resize_width` - the width to resize the image to * `resize_width` - the height to resize the image to * `output_format` - the output format to use (e.g. `jpeg` or `png`) - any output format [supported by Pillow](https://pillow.readthedocs.io/en/stable/handbook/image-file-formats.html) is allowed here. If you specify one but not the other of `resize_width` or `resize_height` the unspecified one will be calculated automatically to maintain the aspect ratio of the image. Here's an example configuration that will resize all images to be JPEGs that are 200 pixels in height: ```json { ""plugins"": { ""datasette-media"": { ""photo"": { ""sql"": ""select filepath, 200 as resize_height, 'jpeg' as output_format from apple_photos where uuid=:key"", ""database"": ""photos"" } } } } ``` If you enable the `enable_transform` configuration option you can instead specify transform parameters at runtime using querystring parameters. For example: - `/-/media/photo/CF972D33?w=200` to resize to a fixed width - `/-/media/photo/CF972D33?h=200` to resize to a fixed height - `/-/media/photo/CF972D33?format=jpeg` to convert to JPEG That option is added like so: ```json { ""plugins"": { ""datasette-media"": { ""photo"": { ""sql"": ""select filepath from apple_photos where uuid=:key"", ""database"": ""photos"", ""enable_transform"": true } } } } ``` The maximum allowed height or width is 4000 pixels. 
You can change this limit using the `""max_width_height""` option: ```json { ""plugins"": { ""datasette-media"": { ""photo"": { ""sql"": ""select filepath from apple_photos where uuid=:key"", ""database"": ""photos"", ""enable_transform"": true, ""max_width_height"": 1000 } } } } ``` ## Configuration In addition to the different named content types, the following special plugin configuration setting is available: - `transform_threads` - number of threads to use for running transformations (e.g. resizing). Defaults to 4. This can be used like this: ```json { ""plugins"": { ""datasette-media"": { ""photo"": { ""sql"": ""select filepath from apple_photos where uuid=:key"", ""database"": ""photos"" }, ""transform_threads"": 8 } } } ``` ","

datasette-media

Datasette plugin for serving media based on a SQL query.

Use this when you have a database table containing references to files on disk - or binary content stored in BLOB columns - that you would like to be able to serve to your users.

Installation

Install this plugin in the same environment as Datasette.

$ pip install datasette-media

HEIC image support

Modern iPhones save their photos using the HEIC image format. Processing these images requires an additional dependency, pyheif. You can include this dependency by running:

$ pip install datasette-media[heif]

Usage

You can use this plugin to configure Datasette to serve static media based on SQL queries to an underlying database table.

Media will be served from URLs that start with /-/media/. The full URL to each media asset will look like this:

/-/media/type-of-media/media-key

type-of-media will correspond to a configured SQL query, and might be something like photo. media-key will be an identifier that is used as part of the underlying SQL query to find which file should be served.

Serving static files from disk

The following metadata.json configuration will cause this plugin to serve files from disk, based on queries to a database table called apple_photos.

{
    ""plugins"": {
        ""datasette-media"": {
            ""photo"": {
                ""sql"": ""select filepath from apple_photos where uuid=:key""
            }
        }
    }
}

A request to /-/media/photo/CF972D33-5324-44F2-8DAE-22CB3182CD31 will execute the following SQL query:

select filepath from apple_photos where uuid=:key

The value from the URL - in this case CF972D33-5324-44F2-8DAE-22CB3182CD31 - will be passed as the :key parameter to the query.

The query returns a filepath value that has been read from the table. The plugin will then read that file from disk and serve it in response to the request.

SQL queries default to running against the first connected database. You can specify a different database to execute the query against using ""database"": ""name_of_db"". To execute against photos.db, use this:

{
    ""plugins"": {
        ""datasette-media"": {
            ""photo"": {
                ""sql"": ""select filepath from apple_photos where uuid=:key"",
                ""database"": ""photos""
            }
        }
    }
}

See dogsheep-photos for an example of an application that can benefit from this plugin.

Serving binary content from BLOB columns

If your SQL query returns a content column, this will be served directly to the user:

{
    ""plugins"": {
        ""datasette-media"": {
            ""photo"": {
                ""sql"": ""select thumbnail as content from photos where uuid=:key"",
                ""database"": ""thumbs""
            }
        }
    }
}

You can also return a content_type column which will be used as the Content-Type header served to the user:

{
    ""plugins"": {
        ""datasette-media"": {
            ""photo"": {
                ""sql"": ""select body as content, 'text/html;charset=utf-8' as content_type from documents where id=:key"",
                ""database"": ""documents""
            }
        }
    }
}

If you do not specify a content_type the default of application/octet-stream will be used.
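
For example, here is a sketch that serves JPEG thumbnails from a BLOB column with an explicit content type, reusing the thumbs database from the earlier example:

{
    ""plugins"": {
        ""datasette-media"": {
            ""photo"": {
                ""sql"": ""select thumbnail as content, 'image/jpeg' as content_type from photos where uuid=:key"",
                ""database"": ""thumbs""
            }
        }
    }
}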

Serving content proxied from a URL

To serve content that is itself fetched from elsewhere, return a content_url column. This can be particularly useful when combined with the ability to resize images (described in the next section).

{
    ""plugins"": {
        ""datasette-media"": {
            ""photos"": {
                ""sql"": ""select photo_url as content_url from photos where id=:key"",
                ""database"": ""photos"",
                ""enable_transform"": true
            }
        }
    }
}

Now you can access resized versions of images from that URL like so:

/-/media/photos/13?w=200

Setting a download file name

The content_filename column can be returned to force browsers to download the content using a specific file name.

{
    ""plugins"": {
        ""datasette-media"": {
            ""hello"": {
                ""sql"": ""select 'Hello ' || :key as content, 'hello.txt' as content_filename""
            }
        }
    }
}

Visiting /-/media/hello/Groot will cause your browser to download a file called hello.txt containing the text Hello Groot.

Resizing or transforming images

Your SQL query can specify that an image should be resized and/or converted to another format by returning additional columns. All three are optional.

  • resize_width - the width to resize the image to
  • resize_height - the height to resize the image to
  • output_format - the output format to use (e.g. jpeg or png) - any output format supported by Pillow is allowed here.

If you specify one but not the other of resize_width or resize_height the unspecified one will be calculated automatically to maintain the aspect ratio of the image.

Here's an example configuration that will resize all images to be JPEGs that are 200 pixels in height:

{
    ""plugins"": {
        ""datasette-media"": {
            ""photo"": {
                ""sql"": ""select filepath, 200 as resize_height, 'jpeg' as output_format from apple_photos where uuid=:key"",
                ""database"": ""photos""
            }
        }
    }
}

If you enable the enable_transform configuration option you can instead specify transform parameters at runtime using querystring parameters. For example:

  • /-/media/photo/CF972D33?w=200 to resize to a fixed width
  • /-/media/photo/CF972D33?h=200 to resize to a fixed height
  • /-/media/photo/CF972D33?format=jpeg to convert to JPEG

That option is added like so:

{
    ""plugins"": {
        ""datasette-media"": {
            ""photo"": {
                ""sql"": ""select filepath from apple_photos where uuid=:key"",
                ""database"": ""photos"",
                ""enable_transform"": true
            }
        }
    }
}

The maximum allowed height or width is 4000 pixels. You can change this limit using the ""max_width_height"" option:

{
    ""plugins"": {
        ""datasette-media"": {
            ""photo"": {
                ""sql"": ""select filepath from apple_photos where uuid=:key"",
                ""database"": ""photos"",
                ""enable_transform"": true,
                ""max_width_height"": 1000
            }
        }
    }
}

Configuration

In addition to the different named content types, the following special plugin configuration setting is available:

  • transform_threads - number of threads to use for running transformations (e.g. resizing). Defaults to 4.

This can be used like this:

{
    ""plugins"": {
        ""datasette-media"": {
            ""photo"": {
                ""sql"": ""select filepath from apple_photos where uuid=:key"",
                ""database"": ""photos""
            },
            ""transform_threads"": 8
        }
    }
}
",,,,,, 271408895,MDEwOlJlcG9zaXRvcnkyNzE0MDg4OTU=,datasette-permissions-sql,simonw/datasette-permissions-sql,0,9599,https://github.com/simonw/datasette-permissions-sql,Datasette plugin for configuring permission checks using SQL queries,0,2020-06-10T23:48:13Z,2020-06-12T07:06:12Z,2020-06-12T07:06:15Z,,25,0,0,Python,1,1,1,1,0,0,0,0,0,apache-2.0,"[""datasette"", ""datasette-plugin"", ""datasette-io""]",0,0,0,master,"{""admin"": false, ""push"": false, ""pull"": false}",,,0,1,"# datasette-permissions-sql [![PyPI](https://img.shields.io/pypi/v/datasette-permissions-sql.svg)](https://pypi.org/project/datasette-permissions-sql/) [![CircleCI](https://circleci.com/gh/simonw/datasette-permissions-sql.svg?style=svg)](https://circleci.com/gh/simonw/datasette-permissions-sql) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/datasette-permissions-sql/blob/master/LICENSE) Datasette plugin for configuring permission checks using SQL queries ## Installation Install this plugin in the same environment as Datasette. $ pip install datasette-permissions-sql ## Usage First, read up on how Datasette's [authentication and permissions system](https://datasette.readthedocs.io/en/latest/authentication.html) works. This plugin lets you define rules containing SQL queries that are executed to see if the currently authenticated actor has permission to perform certain actions. Consider a canned query which authenticated users should only be able to execute if a row in the `users` table says that they are a member of staff. That `users` table in the `mydatabase.db` database could look like this: | id | username | is_staff | |--|--------|--------| | 1 | cleopaws | 0 | | 2 | simon | 1 | Authenticated users have an `actor` that looks like this: ```json { ""id"": 2, ""username"": ""simon"" } ``` To configure the canned query to only be executable by staff users, add the following to your `metadata.json`: ```json { ""plugins"": { ""datasette-permissions-sql"": [ { ""action"": ""view-query"", ""resource"": [""mydatabase"", ""promote_to_staff""], ""sql"": ""SELECT * FROM users WHERE is_staff = 1 AND id = :actor_id"" } ] }, ""databases"": { ""mydatabase"": { ""queries"": { ""promote_to_staff"": { ""sql"": ""UPDATE users SET is is_staff=1 WHERE id=:id"", ""write"": true } } } } } ``` The `""datasette-permissions-sql""` key is a list of rules. Each of those rules has the following shape: ```json { ""action"": ""name-of-action"", ""resource"": [""resource identifier to run this on""], ""sql"": ""SQL query to execute"", ""database"": ""mydatabase"" } ``` Both `""action""` and `""resource""` are optional. If present, the SQL query will only be executed on permission checks that match the action and, if present, the resource indicators. `""database""` is also optional: it specifies the named database that the query should be executed against. If it is not present the first connected database will be used. The Datasette documentation includes a [list of built-in permissions](https://datasette.readthedocs.io/en/stable/authentication.html#built-in-permissions) that you might want to use here. ### The SQL query If the SQL query returns any rows the action will be allowed. If it returns no rows, the plugin hook will return `False` and deny access to that action. The SQL query is called with a number of named parameters. You can use any of these as part of the query. The list of parameters is as follows: * `action` - the action, e.g. 
`""view-database""` * `resource_1` - the first component of the resource, if one was passed * `resource_2` - the second component of the resource, if available * `actor_*` - a parameter for every key on the actor. Usually `actor_id` is present. If any rows are returned, the permission check passes. If no rows are returned the check fails. Another example table, this time granting explicit access to individual tables. Consider a table called `table_access` that looks like this: | user_id | database | table | | - | - | - | | 1 | mydb | dogs | | 2 | mydb | dogs | | 1 | mydb | cats | The following SQL query would grant access to the `dogs` ttable in the `mydb.db` database to users 1 and 2 - but would forbid access for user 2 to the `cats` table: ```sql SELECT * FROM table_access WHERE user_id = :actor_id AND ""database"" = :resource_1 AND ""table"" = :resource_2 ``` In a `metadata.yaml` configuration file that would look like this: ```yaml databases: mydb: allow_sql: {} plugins: datasette-permissions-sql: - action: view-table sql: |- SELECT * FROM table_access WHERE user_id = :actor_id AND ""database"" = :resource_1 AND ""table"" = :resource_2 ``` We're using `allow_sql: {}` here to disable arbitrary SQL queries. This prevents users from running `select * from cats` directly to work around the permissions limits. ### Fallback mode The default behaviour of this plugin is to take full control of specified permissions. The SQL query will directly control if the user is allowed or denied access to the permission. This means that the default policy for each permission (which in Datasette core is ""allow"" for `view-database` and friends) will be ignored. It also means that any other `permission_allowed` plugins will not get their turn once this plugin has executed. You can change this on a per-rule basis using ``""fallback"": true``: ```json { ""action"": ""view-table"", ""resource"": [""mydatabase"", ""mytable""], ""sql"": ""select * from admins where user_id = :actor_id"", ""fallback"": true } ``` When running in fallback mode, a query result returning no rows will cause the plugin hook to return ``None`` - which means ""I have no opinion on this permission, fall back to other plugins or the default"". In this mode you can still return `False` (for ""deny access"") by returning a single row with a single value of `-1`. For example: ```json { ""action"": ""view-table"", ""resource"": [""mydatabase"", ""mytable""], ""sql"": ""select -1 from banned where user_id = :actor_id"", ""fallback"": true } ``` ","

datasette-permissions-sql

Datasette plugin for configuring permission checks using SQL queries

Installation

Install this plugin in the same environment as Datasette.

$ pip install datasette-permissions-sql

Usage

First, read up on how Datasette's authentication and permissions system works.

This plugin lets you define rules containing SQL queries that are executed to see if the currently authenticated actor has permission to perform certain actions.

Consider a canned query which authenticated users should only be able to execute if a row in the users table says that they are a member of staff.

That users table in the mydatabase.db database could look like this:

id  username  is_staff
1   cleopaws  0
2   simon     1

Authenticated users have an actor that looks like this:

{
    ""id"": 2,
    ""username"": ""simon""
}

To configure the canned query to only be executable by staff users, add the following to your metadata.json:

{
    ""plugins"": {
        ""datasette-permissions-sql"": [
            {
                ""action"": ""view-query"",
                ""resource"": [""mydatabase"", ""promote_to_staff""],
                ""sql"": ""SELECT * FROM users WHERE is_staff = 1 AND id = :actor_id""
            }
        ]
    },
    ""databases"": {
        ""mydatabase"": {
            ""queries"": {
                ""promote_to_staff"": {
                    ""sql"": ""UPDATE users SET is is_staff=1 WHERE id=:id"",
                    ""write"": true
                }
            }
        }
    }
}

The ""datasette-permissions-sql"" key is a list of rules. Each of those rules has the following shape:

{
    ""action"": ""name-of-action"",
    ""resource"": [""resource identifier to run this on""],
    ""sql"": ""SQL query to execute"",
    ""database"": ""mydatabase""
}

Both ""action"" and ""resource"" are optional. If present, the SQL query will only be executed on permission checks that match the action and, if present, the resource indicators.

""database"" is also optional: it specifies the named database that the query should be executed against. If it is not present the first connected database will be used.

The Datasette documentation includes a list of built-in permissions that you might want to use here.

The SQL query

If the SQL query returns any rows the action will be allowed. If it returns no rows, the plugin hook will return False and deny access to that action.

The SQL query is called with a number of named parameters. You can use any of these as part of the query.

The list of parameters is as follows:

  • action - the action, e.g. ""view-database""
  • resource_1 - the first component of the resource, if one was passed
  • resource_2 - the second component of the resource, if available
  • actor_* - a parameter for every key on the actor. Usually actor_id is present.
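
For instance, given the example actor shown earlier, both :actor_id and :actor_username would be available, so a rule could match on the username column instead:

SELECT * FROM users WHERE is_staff = 1 AND username = :actor_username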

If any rows are returned, the permission check passes. If no rows are returned the check fails.

Another example table, this time granting explicit access to individual tables. Consider a table called table_access that looks like this:

user_id  database  table
1        mydb      dogs
2        mydb      dogs
1        mydb      cats

The following SQL query would grant access to the dogs table in the mydb.db database to users 1 and 2 - but would forbid access for user 2 to the cats table:

SELECT
    *
FROM
    table_access
WHERE
    user_id = :actor_id
    AND ""database"" = :resource_1
    AND ""table"" = :resource_2

In a metadata.yaml configuration file that would look like this:

databases:
  mydb:
    allow_sql: {}
plugins:
  datasette-permissions-sql:
  - action: view-table
    sql: |-
      SELECT
        *
      FROM
        table_access
      WHERE
        user_id = :actor_id
        AND ""database"" = :resource_1
        AND ""table"" = :resource_2

We're using allow_sql: {} here to disable arbitrary SQL queries. This prevents users from running select * from cats directly to work around the permissions limits.

Fallback mode

The default behaviour of this plugin is to take full control of specified permissions. The SQL query will directly control if the user is allowed or denied access to the permission.

This means that the default policy for each permission (which in Datasette core is ""allow"" for view-database and friends) will be ignored. It also means that any other permission_allowed plugins will not get their turn once this plugin has executed.

You can change this on a per-rule basis using ""fallback"": true:

{
    ""action"": ""view-table"",
    ""resource"": [""mydatabase"", ""mytable""],
    ""sql"": ""select * from admins where user_id = :actor_id"",
    ""fallback"": true
}

When running in fallback mode, a query result returning no rows will cause the plugin hook to return None - which means ""I have no opinion on this permission, fall back to other plugins or the default"".

In this mode you can still return False (for ""deny access"") by returning a single row with a single value of -1. For example:

{
    ""action"": ""view-table"",
    ""resource"": [""mydatabase"", ""mytable""],
    ""sql"": ""select -1 from banned where user_id = :actor_id"",
    ""fallback"": true
}
",,,,,, 273609879,MDEwOlJlcG9zaXRvcnkyNzM2MDk4Nzk=,datasette-saved-queries,simonw/datasette-saved-queries,0,9599,https://github.com/simonw/datasette-saved-queries,Datasette plugin that lets users save and execute queries,0,2020-06-20T00:20:42Z,2020-09-24T05:08:37Z,2020-08-15T23:38:46Z,,12,2,2,Python,1,1,1,1,0,0,0,0,1,,"[""datasette"", ""datasette-plugin"", ""datasette-io""]",0,1,2,main,"{""admin"": false, ""push"": false, ""pull"": false}",,,0,1,"# datasette-saved-queries [![PyPI](https://img.shields.io/pypi/v/datasette-saved-queries.svg)](https://pypi.org/project/datasette-saved-queries/) [![Changelog](https://img.shields.io/github/v/release/simonw/datasette-saved-queries?label=changelog)](https://github.com/simonw/datasette-saved-queries/releases) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/datasette-saved-queries/blob/master/LICENSE) Datasette plugin that lets users save and execute queries ## Installation Install this plugin in the same environment as Datasette. $ pip install datasette-saved-queries ## Usage When the plugin is installed Datasette will automatically create a `saved_queries` table in the first connected database when it starts up. It also creates a `save_query` writable canned query which you can use to save new queries. Queries that you save will be added to the query list on the database page. ## Development To set up this plugin locally, first checkout the code. Then create a new virtual environment: cd datasette-saved-queries python -mvenv venv source venv/bin/activate Or if you are using `pipenv`: pipenv shell Now install the dependencies and tests: pip install -e '.[test]' To run the tests: pytest ","

datasette-saved-queries

Datasette plugin that lets users save and execute queries

Installation

Install this plugin in the same environment as Datasette.

$ pip install datasette-saved-queries

Usage

When the plugin is installed, Datasette will automatically create a saved_queries table in the first connected database on startup.

It also creates a save_query writable canned query which you can use to save new queries.

Queries that you save will be added to the query list on the database page.
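
If you want to see exactly what the plugin created, you can inspect the table definition directly - a minimal sketch, assuming your database file is called data.db:

import sqlite3

conn = sqlite3.connect('data.db')
row = conn.execute(
    'select sql from sqlite_master where name = ?', ('saved_queries',)
).fetchone()
print(row[0])  # prints the CREATE TABLE statement for saved_queries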

Development

To set up this plugin locally, first check out the code. Then create a new virtual environment:

cd datasette-saved-queries
python -mvenv venv
source venv/bin/activate

Or if you are using pipenv:

pipenv shell

Now install the dependencies and test dependencies:

pip install -e '.[test]'

To run the tests:

pytest
",,,,,, 275615947,MDEwOlJlcG9zaXRvcnkyNzU2MTU5NDc=,datasette-glitch,simonw/datasette-glitch,0,9599,https://github.com/simonw/datasette-glitch,Utilities to help run Datasette on Glitch,0,2020-06-28T15:41:25Z,2020-07-01T22:48:35Z,2020-07-01T22:49:22Z,,3,1,1,Python,1,1,1,1,0,0,0,0,0,,"[""glitch"", ""datasette"", ""datasette-plugin"", ""datasette-io""]",0,0,1,main,"{""admin"": false, ""push"": false, ""pull"": false}",,,0,1,"# datasette-glitch [![PyPI](https://img.shields.io/pypi/v/datasette-glitch.svg)](https://pypi.org/project/datasette-glitch/) [![Changelog](https://img.shields.io/github/v/release/simonw/datasette-glitch?label=changelog)](https://github.com/simonw/datasette-glitch/releases) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/datasette-glitch/blob/master/LICENSE) Utilities to help run Datasette on Glitch ## Installation Install this plugin in the same environment as Datasette. $ pip install datasette-glitch ## Usage This plugin outputs a special link which will sign you into Datasette as the root user. Click Tools -> Logs in the Glitch editor interface after your app starts to see the link. ","

datasette-glitch

Utilities to help run Datasette on Glitch

Installation

Install this plugin in the same environment as Datasette.

$ pip install datasette-glitch

Usage

This plugin outputs a special link which will sign you into Datasette as the root user.

Click Tools -> Logs in the Glitch editor interface after your app starts to see the link.

",,,,,, 291359358,MDEwOlJlcG9zaXRvcnkyOTEzNTkzNTg=,datasette-yaml,simonw/datasette-yaml,0,9599,https://github.com/simonw/datasette-yaml,Export Datasette records as YAML,0,2020-08-29T22:32:15Z,2020-12-28T03:20:36Z,2021-05-13T08:59:53Z,,7,2,2,Python,1,1,1,1,0,1,0,0,1,,"[""yaml"", ""datasette"", ""datasette-plugin"", ""datasette-io""]",1,1,2,main,"{""admin"": false, ""push"": false, ""pull"": false}",,,1,1,"# datasette-yaml [![PyPI](https://img.shields.io/pypi/v/datasette-yaml.svg)](https://pypi.org/project/datasette-yaml/) [![Changelog](https://img.shields.io/github/v/release/simonw/datasette-yaml?include_prereleases&label=changelog)](https://github.com/simonw/datasette-yaml/releases) [![Tests](https://github.com/simonw/datasette-yaml/workflows/Test/badge.svg)](https://github.com/simonw/datasette-yaml/actions?query=workflow%3ATest) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/datasette-yaml/blob/main/LICENSE) Export Datasette records as YAML ## Installation Install this plugin in the same environment as Datasette. $ datasette install datasette-yaml ## Usage Having installed this plugin, every table and query will gain a new `.yaml` export link. You can also construct these URLs directly: `/dbname/tablename.yaml` ## Demo The plugin is running on [covid-19.datasettes.com](https://covid-19.datasettes.co/) - for example [/covid/latest_ny_times_counties_with_populations.yaml](https://covid-19.datasettes.com/covid/latest_ny_times_counties_with_populations.yaml) ## Development To set up this plugin locally, first checkout the code. Then create a new virtual environment: cd datasette-yaml python3 -mvenv venv source venv/bin/activate Or if you are using `pipenv`: pipenv shell Now install the dependencies and tests: pip install -e '.[test]' To run the tests: pytest ","

datasette-yaml

Export Datasette records as YAML

Installation

Install this plugin in the same environment as Datasette.

$ datasette install datasette-yaml

Usage

Once this plugin is installed, every table and query will gain a new .yaml export link.

You can also construct these URLs directly: /dbname/tablename.yaml
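
Since the output is plain YAML over HTTP, it is easy to consume programmatically - a sketch assuming a local Datasette on its default port 8001 and PyYAML installed; the database and table names are placeholders:

import urllib.request

import yaml  # PyYAML

with urllib.request.urlopen('http://127.0.0.1:8001/dbname/tablename.yaml') as response:
    rows = yaml.safe_load(response.read())
print(rows)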

Demo

The plugin is running on covid-19.datasettes.com - for example /covid/latest_ny_times_counties_with_populations.yaml

Development

To set up this plugin locally, first check out the code. Then create a new virtual environment:

cd datasette-yaml
python3 -mvenv venv
source venv/bin/activate

Or if you are using pipenv:

pipenv shell

Now install the dependencies and test dependencies:

pip install -e '.[test]'

To run the tests:

pytest
",,,,,, 293164447,MDEwOlJlcG9zaXRvcnkyOTMxNjQ0NDc=,datasette-backup,simonw/datasette-backup,0,9599,https://github.com/simonw/datasette-backup,Plugin adding backup options to Datasette,0,2020-09-05T22:33:29Z,2020-09-24T00:16:59Z,2020-09-07T02:27:30Z,,6,1,1,Python,1,1,1,1,0,0,0,0,3,,"[""datasette"", ""datasette-plugin"", ""datasette-io""]",0,3,1,main,"{""admin"": false, ""push"": false, ""pull"": false}",,,0,1,"# datasette-backup [![PyPI](https://img.shields.io/pypi/v/datasette-backup.svg)](https://pypi.org/project/datasette-backup/) [![Changelog](https://img.shields.io/github/v/release/simonw/datasette-backup?include_prereleases&label=changelog)](https://github.com/simonw/datasette-backup/releases) [![Tests](https://github.com/simonw/datasette-backup/workflows/Test/badge.svg)](https://github.com/simonw/datasette-backup/actions?query=workflow%3ATest) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/datasette-backup/blob/main/LICENSE) Plugin adding backup options to Datasette ## Installation Install this plugin in the same environment as Datasette. $ datasette install datasette-backup ## Usage Once installed, you can download a SQL backup of any of your databases from: /-/backup/dbname.sql ## Development To set up this plugin locally, first checkout the code. Then create a new virtual environment: cd datasette-backup python3 -mvenv venv source venv/bin/activate Or if you are using `pipenv`: pipenv shell Now install the dependencies and tests: pip install -e '.[test]' To run the tests: pytest ","

datasette-backup

Plugin adding backup options to Datasette

Installation

Install this plugin in the same environment as Datasette.

$ datasette install datasette-backup

Usage

Once installed, you can download a SQL backup of any of your databases from:

/-/backup/dbname.sql
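
The download is a plain SQL dump, so you can replay it into a fresh SQLite file - a sketch assuming a local Datasette on its default port 8001; the database name is a placeholder:

import sqlite3
import urllib.request

sql = urllib.request.urlopen('http://127.0.0.1:8001/-/backup/dbname.sql').read().decode('utf-8')
connection = sqlite3.connect('restored.db')
connection.executescript(sql)  # replay the dump into the new database
connection.close()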

Development

To set up this plugin locally, first check out the code. Then create a new virtual environment:

cd datasette-backup
python3 -mvenv venv
source venv/bin/activate

Or if you are using pipenv:

pipenv shell

Now install the dependencies and test dependencies:

pip install -e '.[test]'

To run the tests:

pytest
",,,,,, 331720824,MDEwOlJlcG9zaXRvcnkzMzE3MjA4MjQ=,datasette-leaflet,simonw/datasette-leaflet,0,9599,https://github.com/simonw/datasette-leaflet,Datasette plugin adding the Leaflet JavaScript library,0,2021-01-21T18:41:19Z,2021-04-20T16:27:35Z,2021-02-01T22:20:28Z,,124,3,3,JavaScript,1,1,1,1,0,0,0,0,2,,"[""datasette"", ""datasette-plugin"", ""datasette-io""]",0,2,3,main,"{""admin"": false, ""push"": false, ""pull"": false}",,,0,1,"# datasette-leaflet [![PyPI](https://img.shields.io/pypi/v/datasette-leaflet.svg)](https://pypi.org/project/datasette-leaflet/) [![Changelog](https://img.shields.io/github/v/release/simonw/datasette-leaflet?include_prereleases&label=changelog)](https://github.com/simonw/datasette-leaflet/releases) [![Tests](https://github.com/simonw/datasette-leaflet/workflows/Test/badge.svg)](https://github.com/simonw/datasette-leaflet/actions?query=workflow%3ATest) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/datasette-leaflet/blob/main/LICENSE) Datasette plugin adding the [Leaflet](https://leafletjs.com/) JavaScript library. A growing number of Datasette plugins depend on the Leaflet JavaScript mapping library. They each have their own way of loading Leaflet, which could result in loading it multiple times (with multiple versions) if more than one plugin is installed. This library is intended to solve this problem, by providing a single plugin they can all depend on that loads Leaflet in a reusable way. Plugins that use this: - [datasette-leaflet-freedraw](https://datasette.io/plugins/datasette-leaflet-freedraw) - [datasette-leaflet-geojson](https://datasette.io/plugins/datasette-leaflet-geojson) - [datasette-cluster-map](https://datasette.io/plugins/datasette-cluster-map) ## Installation You can install this plugin like so: datasette install datasette-leaflet Usually this plugin will be a dependency of other plugins, so it should be installed automatically when you install them. ## Usage The plugin makes `leaflet.js` and `leaflet.css` available as static files. It provides two custom template variables with the URLs of those two files. - `{{ datasette_leaflet_url }}` is the URL to the JavaScript - `{{ datasette_leaflet_css_url }}` is the URL to the CSS These URLs are also made available as global JavaScript constants: - `datasette.leaflet.JAVASCRIPT_URL` - `datasette.leaflet.CSS_URL` The JavaScript is packaed as a [JavaScript module](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Modules). You can dynamically import the JavaScript from a custom template like this: ```html+jinja ``` You can load the CSS like this: ```html+jinja ``` Or dynamically like this: ```html+jinja ``` Here's a full example that loads the JavaScript and CSS and renders a map: ```html+jinja ``` ","

datasette-leaflet

Datasette plugin adding the Leaflet JavaScript library.

A growing number of Datasette plugins depend on the Leaflet JavaScript mapping library. They each have their own way of loading Leaflet, which could result in loading it multiple times (with multiple versions) if more than one plugin is installed.

This library is intended to solve this problem by providing a single plugin they can all depend on that loads Leaflet in a reusable way.

Plugins that use this:

  • datasette-leaflet-freedraw
  • datasette-leaflet-geojson
  • datasette-cluster-map

Installation

You can install this plugin like so:

datasette install datasette-leaflet

Usually this plugin will be a dependency of other plugins, so it should be installed automatically when you install them.

Usage

The plugin makes leaflet.js and leaflet.css available as static files. It provides two custom template variables with the URLs of those two files.

  • {{ datasette_leaflet_url }} is the URL to the JavaScript
  • {{ datasette_leaflet_css_url }} is the URL to the CSS

These URLs are also made available as global JavaScript constants:

  • datasette.leaflet.JAVASCRIPT_URL
  • datasette.leaflet.CSS_URL

The JavaScript is packaged as a JavaScript module. You can dynamically import the JavaScript from a custom template like this:

<script type=""module"">
import('{{ datasette_leaflet_url }}')
  .then((leaflet) => {
    /* Use leaflet here */
  });
</script>

You can load the CSS like this:

<link rel=""stylesheet"" href=""{{ datasette_leaflet_css_url }}"">

Or dynamically like this:

<script>
let link = document.createElement('link');
link.rel = 'stylesheet';
link.href = '{{ datasette_leaflet_css_url }}';
document.head.appendChild(link);
</script>

Here's a full example that loads the JavaScript and CSS and renders a map:

<script type=""module"">
window.DATASETTE_CLUSTER_MAP_TILE_LAYER = ""https://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png"";
window.DATASETTE_CLUSTER_MAP_TILE_LAYER_OPTIONS = {""maxZoom"": 19, ""detectRetina"": true, ""attribution"": ""&copy; <a href=\""https://www.openstreetmap.org/copyright\"">OpenStreetMap</a> contributors""};
let link = document.createElement('link');
link.rel = 'stylesheet';
link.href = '{{ datasette_leaflet_css_url }}';
document.head.appendChild(link);
import('{{ datasette_leaflet_url }}')
  .then((leaflet) => {
    let div = document.createElement('div');
    div.style.height = '400px';
    document.querySelector('.content').appendChild(div);
    let tiles = leaflet.tileLayer(
        window.DATASETTE_CLUSTER_MAP_TILE_LAYER,
        window.DATASETTE_CLUSTER_MAP_TILE_LAYER_OPTIONS
    );
    let map = leaflet.map(div, {
        center: leaflet.latLng(0, 0),
        zoom: 2,
        layers: [tiles]
    });
  });
</script>
",,,,,, 335137108,MDEwOlJlcG9zaXRvcnkzMzUxMzcxMDg=,datasette-basemap,simonw/datasette-basemap,0,9599,https://github.com/simonw/datasette-basemap,A basemap for Datasette and datasette-leaflet,0,2021-02-02T01:49:09Z,2021-02-03T21:56:20Z,2021-02-03T21:56:18Z,https://datasette.io/plugins/datasette-basemap,43,1,1,Python,1,1,1,1,0,0,0,0,0,,"[""openstreetmap"", ""mbtiles"", ""datasette"", ""datasette-plugin"", ""datasette-io""]",0,0,1,main,"{""admin"": false, ""push"": false, ""pull"": false}",,,0,1,"# datasette-basemap [![PyPI](https://img.shields.io/pypi/v/datasette-basemap.svg)](https://pypi.org/project/datasette-basemap/) [![Changelog](https://img.shields.io/github/v/release/simonw/datasette-basemap?include_prereleases&label=changelog)](https://github.com/simonw/datasette-basemap/releases) [![Tests](https://github.com/simonw/datasette-basemap/workflows/Test/badge.svg)](https://github.com/simonw/datasette-basemap/actions?query=workflow%3ATest) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/datasette-basemap/blob/main/LICENSE) A basemap for Datasette and datasette-leaflet ## Installation Install this plugin in the same environment as Datasette. $ datasette install datasette-basemap ## Usage This plugin will make a `basemap` database available containing OpenStreetMap tiles for zoom levels 0-6 in the [mbtiles](https://github.com/mapbox/mbtiles-spec) format. It is designed for use with the [datasette-tiles](https://datasette.io/plugins/datasette-tiles) tile server plugin. ## Demo You can preview this map at https://datasette-tiles-demo.datasette.io/-/tiles/basemap and browse the database directly at https://datasette-tiles-demo.datasette.io/basemap ## License The data bundled with this package is © OpenStreetMap contributors, licensed under the [Open Data Commons Open Database License](https://opendatacommons.org/licenses/odbl/). See [this page](https://www.openstreetmap.org/copyright) for more details. ## Development To set up this plugin locally, first checkout the code. Then create a new virtual environment: cd datasette-basemap python3 -mvenv venv source venv/bin/activate Or if you are using `pipenv`: pipenv shell Now install the dependencies and tests: pip install -e '.[test]' To run the tests: pytest ","

datasette-basemap

A basemap for Datasette and datasette-leaflet

Installation

Install this plugin in the same environment as Datasette.

$ datasette install datasette-basemap

Usage

This plugin will make a basemap database available containing OpenStreetMap tiles for zoom levels 0-6 in the mbtiles format. It is designed for use with the datasette-tiles tile server plugin.
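
Because mbtiles files are just SQLite databases, you can inspect the bundled tiles directly - a sketch assuming a local copy of the database file (the file name here is a placeholder) and the standard mbtiles tiles table:

import sqlite3

conn = sqlite3.connect('basemap.db')
for zoom_level, number_of_tiles in conn.execute(
    'select zoom_level, count(*) from tiles group by zoom_level order by zoom_level'
):
    print(zoom_level, number_of_tiles)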

Demo

You can preview this map at https://datasette-tiles-demo.datasette.io/-/tiles/basemap and browse the database directly at https://datasette-tiles-demo.datasette.io/basemap

License

The data bundled with this package is © OpenStreetMap contributors, licensed under the Open Data Commons Open Database License. See this page for more details.

Development

To set up this plugin locally, first check out the code. Then create a new virtual environment:

cd datasette-basemap
python3 -mvenv venv
source venv/bin/activate

Or if you are using pipenv:

pipenv shell

Now install the dependencies and test dependencies:

pip install -e '.[test]'

To run the tests:

pytest
",,,,,, 342126610,MDEwOlJlcG9zaXRvcnkzNDIxMjY2MTA=,datasette-block,simonw/datasette-block,0,9599,https://github.com/simonw/datasette-block,Block all access to specific path prefixes,0,2021-02-25T04:51:08Z,2021-02-25T08:18:28Z,2021-02-25T05:03:45Z,https://datasette.io/plugins/datasette-block,4,1,1,Python,1,1,1,1,0,0,0,0,0,,"[""datasette"", ""datasette-plugin"", ""datasette-io""]",0,0,1,main,"{""admin"": false, ""push"": false, ""pull"": false}",,,0,1,"# datasette-block [![PyPI](https://img.shields.io/pypi/v/datasette-block.svg)](https://pypi.org/project/datasette-block/) [![Changelog](https://img.shields.io/github/v/release/simonw/datasette-block?include_prereleases&label=changelog)](https://github.com/simonw/datasette-block/releases) [![Tests](https://github.com/simonw/datasette-block/workflows/Test/badge.svg)](https://github.com/simonw/datasette-block/actions?query=workflow%3ATest) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/datasette-block/blob/main/LICENSE) Block all access to specific path prefixes ## Installation Install this plugin in the same environment as Datasette. $ datasette install datasette-block ## Configuration Add the following to `metadata.json` to block specific path prefixes: ```json { ""plugins"": { ""datasette-block"": { ""prefixes"": [""/all/""] } } } ``` This will cause a 403 error to be returned for any path beginning with `/all/`. This blocking happens as an ASGI wrapper around Datasette. ## Why would you need this? You almost always would not. I use it with `datasette-ripgrep` to block access to static assets for unauthenticated users. ## Development To set up this plugin locally, first checkout the code. Then create a new virtual environment: cd datasette-block python3 -mvenv venv source venv/bin/activate Or if you are using `pipenv`: pipenv shell Now install the dependencies and tests: pip install -e '.[test]' To run the tests: pytest ","

datasette-block

Block all access to specific path prefixes

Installation

Install this plugin in the same environment as Datasette.

$ datasette install datasette-block

Configuration

Add the following to metadata.json to block specific path prefixes:

{
    ""plugins"": {
        ""datasette-block"": {
            ""prefixes"": [""/all/""]
        }
    }
}

This will cause a 403 error to be returned for any path beginning with /all/.

This blocking happens as an ASGI wrapper around Datasette.
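
To illustrate the idea (this is a sketch of the general technique, not the plugin's actual code), an ASGI middleware that blocks path prefixes looks roughly like this:

class BlockPrefixes:
    def __init__(self, app, prefixes):
        self.app = app
        self.prefixes = prefixes

    async def __call__(self, scope, receive, send):
        # Short-circuit any HTTP request whose path matches a blocked prefix.
        if scope['type'] == 'http' and any(
            scope['path'].startswith(prefix) for prefix in self.prefixes
        ):
            await send({'type': 'http.response.start', 'status': 403, 'headers': []})
            await send({'type': 'http.response.body', 'body': b'403: blocked'})
            return
        # Otherwise pass the request through to the wrapped application.
        await self.app(scope, receive, send)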

Why would you need this?

You almost always would not. I use it with datasette-ripgrep to block access to static assets for unauthenticated users.

Development

To set up this plugin locally, first check out the code. Then create a new virtual environment:

cd datasette-block
python3 -mvenv venv
source venv/bin/activate

Or if you are using pipenv:

pipenv shell

Now install the dependencies and test dependencies:

pip install -e '.[test]'

To run the tests:

pytest
",,,,,, 375546675,MDEwOlJlcG9zaXRvcnkzNzU1NDY2NzU=,datasette-placekey,simonw/datasette-placekey,0,9599,https://github.com/simonw/datasette-placekey,SQL functions for working with placekeys,0,2021-06-10T02:31:27Z,2021-06-10T02:33:22Z,2021-06-10T02:32:42Z,https://datasette.io/plugins/datasette-placekey,3,0,0,Python,1,1,1,1,0,0,0,0,1,,"[""datasette"", ""datasette-plugin"", ""datasette-io"", ""placekey""]",0,1,0,main,"{""admin"": false, ""push"": false, ""pull"": false}",,,0,1,"# datasette-placekey [![PyPI](https://img.shields.io/pypi/v/datasette-placekey.svg)](https://pypi.org/project/datasette-placekey/) [![Changelog](https://img.shields.io/github/v/release/simonw/datasette-placekey?include_prereleases&label=changelog)](https://github.com/simonw/datasette-placekey/releases) [![Tests](https://github.com/simonw/datasette-placekey/workflows/Test/badge.svg)](https://github.com/simonw/datasette-placekey/actions?query=workflow%3ATest) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/datasette-placekey/blob/main/LICENSE) SQL functions for working with [placekeys](https://www.placekey.io/). ## Installation Install this plugin in the same environment as Datasette. $ datasette install datasette-placekey ## Usage The following SQL functions are exposed - [documentation here](https://placekey.github.io/placekey-py/placekey.html#module-placekey.placekey). ```sql select geo_to_placekey(33.0896104,129.7900839), placekey_to_geo('@6nh-nhh-kvf'), placekey_to_geo_latitude('@6nh-nhh-kvf'), placekey_to_geo_longitude('@6nh-nhh-kvf'), placekey_to_h3('@6nh-nhh-kvf'), h3_to_placekey('8a30d94e4c87fff'), placekey_to_geojson('@6nh-nhh-kvf'), placekey_to_wkt('@6nh-nhh-kvf'), placekey_format_is_valid('@6nh-nhh-kvf'); ``` ## Development To set up this plugin locally, first checkout the code. Then create a new virtual environment: cd datasette-placekey python3 -mvenv venv source venv/bin/activate Or if you are using `pipenv`: pipenv shell Now install the dependencies and tests: pip install -e '.[test]' To run the tests: pytest ","

datasette-placekey

SQL functions for working with placekeys.

Installation

Install this plugin in the same environment as Datasette.

$ datasette install datasette-placekey

Usage

The following SQL functions are exposed - documentation here.

select
  geo_to_placekey(33.0896104,129.7900839),
  placekey_to_geo('@6nh-nhh-kvf'),
  placekey_to_geo_latitude('@6nh-nhh-kvf'),
  placekey_to_geo_longitude('@6nh-nhh-kvf'),
  placekey_to_h3('@6nh-nhh-kvf'),
  h3_to_placekey('8a30d94e4c87fff'),
  placekey_to_geojson('@6nh-nhh-kvf'),
  placekey_to_wkt('@6nh-nhh-kvf'),
  placekey_format_is_valid('@6nh-nhh-kvf');

Development

To set up this plugin locally, first check out the code. Then create a new virtual environment:

cd datasette-placekey
python3 -mvenv venv
source venv/bin/activate

Or if you are using pipenv:

pipenv shell

Now install the dependencies and test dependencies:

pip install -e '.[test]'

To run the tests:

pytest
",,,,,,