id,node_id,name,full_name,private,owner,html_url,description,fork,created_at,updated_at,pushed_at,homepage,size,stargazers_count,watchers_count,language,has_issues,has_projects,has_downloads,has_wiki,has_pages,forks_count,archived,disabled,open_issues_count,license,topics,forks,open_issues,watchers,default_branch,permissions,temp_clone_token,organization,network_count,subscribers_count,readme,readme_html,allow_forking,visibility,is_template,template_repository,web_commit_signoff_required,has_discussions 167730071,MDEwOlJlcG9zaXRvcnkxNjc3MzAwNzE=,datasette-pretty-json,simonw/datasette-pretty-json,0,9599,https://github.com/simonw/datasette-pretty-json,Datasette plugin that pretty-prints any column values that are valid JSON objects or arrays,0,2019-01-26T19:30:43Z,2022-09-24T06:13:11Z,2022-09-28T21:06:31Z,,14,8,8,Python,1,1,1,1,0,0,0,0,1,apache-2.0,"[""datasette"", ""datasette-io"", ""datasette-plugin"", ""json""]",0,1,8,master,"{""admin"": false, ""maintain"": false, ""push"": false, ""triage"": false, ""pull"": false}",,,0,2,"# datasette-pretty-json [![PyPI](https://img.shields.io/pypi/v/datasette-pretty-json.svg)](https://pypi.org/project/datasette-pretty-json/) [![Changelog](https://img.shields.io/github/v/release/simonw/datasette-pretty-json?include_prereleases&label=changelog)](https://github.com/simonw/datasette-pretty-json/releases) [![Tests](https://github.com/simonw/datasette-pretty-json/workflows/Test/badge.svg)](https://github.com/simonw/datasette-pretty-json/actions?query=workflow%3ATest) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/datasette-pretty-json/blob/main/LICENSE) [Datasette](https://github.com/simonw/datasette) plugin that pretty-prints any column values that are valid JSON objects or arrays. You may also be interested in [datasette-json-html](https://github.com/simonw/datasette-json-html). ","

datasette-pretty-json

Datasette plugin that pretty-prints any column values that are valid JSON objects or arrays.
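To try it out you need a table containing JSON strings. Here's a minimal sketch using Python's standard library sqlite3 module - the database, table and column names are invented for illustration:

import json
import sqlite3

conn = sqlite3.connect('demo.db')
conn.execute('create table if not exists examples (id integer primary key, data text)')
# Store a JSON object as a string - the plugin will pretty-print it
conn.execute(
    'insert into examples (data) values (?)',
    (json.dumps({'name': 'Cleo', 'tags': ['dog', 'good']}),),
)
conn.commit()

Run datasette demo.db and the data column should render as indented JSON.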

You may also be interested in datasette-json-html.

",1,public,0,,0, 184168864,MDEwOlJlcG9zaXRvcnkxODQxNjg4NjQ=,datasette-render-html,simonw/datasette-render-html,0,9599,https://github.com/simonw/datasette-render-html,Plugin for selectively rendering the HTML is specific columns,0,2019-04-30T01:21:25Z,2020-09-24T04:44:47Z,2021-03-17T03:57:13Z,,8,2,2,Python,1,1,1,1,0,2,0,0,1,,"[""datasette"", ""datasette-plugin"", ""datasette-io""]",2,1,2,master,"{""admin"": false, ""push"": false, ""pull"": false}",,,2,1,"# datasette-render-html [![PyPI](https://img.shields.io/pypi/v/datasette-render-html.svg)](https://pypi.org/project/datasette-render-html/) [![CircleCI](https://circleci.com/gh/simonw/datasette-render-html.svg?style=svg)](https://circleci.com/gh/simonw/datasette-render-html) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/datasette-render-html/blob/master/LICENSE) This Datasette plugin lets you configure Datasette to render specific columns as HTML in the table and row interfaces. This means you can store HTML in those columns and have it rendered as such on those pages. If you have a database called `docs.db` containing a `glossary` table and you want the `definition` column in that table to be rendered as HTML, you would use a `metadata.json` file that looks like this: { ""databases"": { ""docs"": { ""tables"": { ""glossary"": { ""plugins"": { ""datasette-render-html"": { ""columns"": [""definition""] } } } } } } } ## Security This plugin allows HTML to be rendered exactly as it is stored in the database. As such, you should be sure only to use this against columns with content that you trust - otherwise you could open yourself up to an [XSS attack](https://owasp.org/www-community/attacks/xss/). It's possible to configure this plugin to apply to columns with specific names across whole databases or the full Datasette instance, but doing so is not safe. It could open you up to XSS vulnerabilities where an attacker composes a SQL query that results in a column containing unsafe HTML. As such, you should only use this plugin against specific columns in specific tables, as shown in the example above. ","

datasette-render-html

This Datasette plugin lets you configure Datasette to render specific columns as HTML in the table and row interfaces.

This means you can store HTML in those columns and have it rendered as such on those pages.

If you have a database called docs.db containing a glossary table and you want the definition column in that table to be rendered as HTML, you would use a metadata.json file that looks like this:

{
    ""databases"": {
        ""docs"": {
            ""tables"": {
                ""glossary"": {
                    ""plugins"": {
                        ""datasette-render-html"": {
                            ""columns"": [""definition""]
                        }
                    }
                }
            }
        }
    }
}
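With that configuration in place, any HTML stored in the definition column will be rendered on the table and row pages. Here's a sketch that populates such a table using Python's standard library sqlite3 module - the schema is invented to match the example above:

import sqlite3

conn = sqlite3.connect('docs.db')
conn.execute('create table if not exists glossary (term text, definition text)')
conn.execute(
    'insert into glossary (term, definition) values (?, ?)',
    ('CSV', 'Comma-separated values - see <a href=""https://en.wikipedia.org/wiki/Comma-separated_values"">Wikipedia</a>.'),
)
conn.commit()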

Security

This plugin allows HTML to be rendered exactly as it is stored in the database. As such, you should only use it against columns containing content that you trust - otherwise you could open yourself up to an XSS attack.

It's possible to configure this plugin to apply to columns with specific names across whole databases or the full Datasette instance, but doing so is not safe. It could open you up to XSS vulnerabilities where an attacker composes a SQL query that results in a column containing unsafe HTML.

As such, you should only use this plugin against specific columns in specific tables, as shown in the example above.

",,,,,, 190950781,MDEwOlJlcG9zaXRvcnkxOTA5NTA3ODE=,datasette-bplist,simonw/datasette-bplist,0,9599,https://github.com/simonw/datasette-bplist,Datasette plugin for working with Apple's binary plist format,0,2019-06-09T01:15:01Z,2021-06-07T18:05:00Z,2019-06-09T01:17:19Z,,7,9,9,Python,1,1,1,1,0,0,0,0,1,apache-2.0,"[""bplist"", ""datasette"", ""datasette-plugin"", ""datasette-io""]",0,1,9,master,"{""admin"": false, ""push"": false, ""pull"": false}",,,0,0,"# datasette-bplist [![PyPI](https://img.shields.io/pypi/v/datasette-bplist.svg)](https://pypi.org/project/datasette-bplist/) [![CircleCI](https://circleci.com/gh/simonw/datasette-bplist.svg?style=svg)](https://circleci.com/gh/simonw/datasette-bplist) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/datasette-bplist/blob/master/LICENSE) Datasette plugin for working with Apple's [binary plist](https://en.wikipedia.org/wiki/Property_list) format. This plugin adds two features: a display hook and a SQL function. The display hook will detect any database values that are encoded using the binary plist format. It will decode them, convert them into JSON and display them pretty-printed in the Datasette UI. The SQL function `bplist_to_json(value)` can be used inside a SQL query to convert a binary plist value into a JSON string. This can then be used with SQLite's `json_extract()` function or with the [datasette-jq](https://github.com/simonw/datasette-jq) plugin to further analyze that data as part of a SQL query. Install this plugin in the same environment as Datasette to enable this new functionality: pip install datasette-bplist ## Trying it out If you use a Mac you already have plenty of SQLite databases that contain binary plist data. One example is the database that powers the Apple Photos app. This database tends to be locked, so you will need to create a copy of the database in order to run queries against it: cp ~/Pictures/Photos\ Library.photoslibrary/database/photos.db /tmp/photos.db The database also makes use of custom SQLite extensions which prevent it from opening in Datasette. You can work around this by exporting the data that you want to experiment with into a new SQLite file. I recommend trying this plugin against the `RKMaster_dataNote` table, which contains plist-encoded EXIF metadata about the photos you have taken. You can export that table into a fresh database like so: sqlite3 /tmp/photos.db "".dump RKMaster_dataNote"" | sqlite3 /tmp/exif.db Now run `datasette /tmp/exif.db` and you can start trying out the plugin. ## Using the bplist_to_json() SQL function Once you have the `exif.db` demo working, you can try the `bplist_to_json()` SQL function. 
Here's a query that shows the camera lenses you have used the most often to take photos: select json_extract( bplist_to_json(value), ""$.{Exif}.LensModel"" ) as lens, count(*) as n from RKMaster_dataNote group by lens order by n desc; If you have a large number of photos this query can take a long time to execute, so you may need to increase the SQL time limit enforced by Datasette like so: $ datasette /tmp/exif.db \ --config sql_time_limit_ms:10000 Here's another query, showing the time at which you took every photo in your library which is classified as as screenshot: select attachedToId, json_extract( bplist_to_json(value), ""$.{Exif}.DateTimeOriginal"" ) from RKMaster_dataNote where json_extract( bplist_to_json(value), ""$.{Exif}.UserComment"" ) = ""Screenshot"" And if you install the [datasette-cluster-map](https://github.com/simonw/datasette-cluster-map) plugin, this query will show you a map of your most recent 1000 photos: select *, json_extract( bplist_to_json(value), ""$.{GPS}.Latitude"" ) as latitude, -json_extract( bplist_to_json(value), ""$.{GPS}.Longitude"" ) as longitude, json_extract( bplist_to_json(value), ""$.{Exif}.DateTimeOriginal"" ) as datetime from RKMaster_dataNote where latitude is not null order by attachedToId desc ","

datasette-bplist

Datasette plugin for working with Apple's binary plist format.

This plugin adds two features: a display hook and a SQL function.

The display hook will detect any database values that are encoded using the binary plist format. It will decode them, convert them into JSON and display them pretty-printed in the Datasette UI.

The SQL function bplist_to_json(value) can be used inside a SQL query to convert a binary plist value into a JSON string. This can then be used with SQLite's json_extract() function or with the datasette-jq plugin to further analyze that data as part of a SQL query.

Install this plugin in the same environment as Datasette to enable this new functionality:

pip install datasette-bplist

Trying it out

If you use a Mac, you already have plenty of SQLite databases that contain binary plist data.

One example is the database that powers the Apple Photos app.

This database tends to be locked, so you will need to create a copy of the database in order to run queries against it:

cp ~/Pictures/Photos\ Library.photoslibrary/database/photos.db /tmp/photos.db

The database also makes use of custom SQLite extensions which prevent it from opening in Datasette.

You can work around this by exporting the data that you want to experiment with into a new SQLite file.

I recommend trying this plugin against the RKMaster_dataNote table, which contains plist-encoded EXIF metadata about the photos you have taken.

You can export that table into a fresh database like so:

sqlite3 /tmp/photos.db "".dump RKMaster_dataNote"" | sqlite3 /tmp/exif.db

Now run datasette /tmp/exif.db and you can start trying out the plugin.
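If you don't have a Mac (or don't want to export your Photos database), you can generate a test database instead. Here's a minimal sketch using Python's standard library plistlib and sqlite3 modules - the table and column names are invented for illustration:

import plistlib
import sqlite3

# Encode a dictionary as a binary plist
value = plistlib.dumps({'camera': 'iPhone', 'iso': 100}, fmt=plistlib.FMT_BINARY)

conn = sqlite3.connect('/tmp/bplist-demo.db')
conn.execute('create table if not exists notes (id integer primary key, value blob)')
conn.execute('insert into notes (value) values (?)', (value,))
conn.commit()

Running datasette /tmp/bplist-demo.db should show the value column decoded and displayed as JSON.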

Using the bplist_to_json() SQL function

Once you have the exif.db demo working, you can try the bplist_to_json() SQL function.

Here's a query that shows the camera lenses you have used the most often to take photos:

select
    json_extract(
        bplist_to_json(value),
        ""$.{Exif}.LensModel""
    ) as lens,
    count(*) as n
from RKMaster_dataNote
group by lens
order by n desc;

If you have a large number of photos, this query can take a long time to execute, so you may need to increase the SQL time limit enforced by Datasette like so:

$ datasette /tmp/exif.db \
    --config sql_time_limit_ms:10000

Here's another query, showing the time at which you took every photo in your library that is classified as a screenshot:

select
    attachedToId,
    json_extract(
        bplist_to_json(value),
        ""$.{Exif}.DateTimeOriginal""
    )
from RKMaster_dataNote
where
    json_extract(
        bplist_to_json(value),
        ""$.{Exif}.UserComment""
    ) = ""Screenshot""

And if you install the datasette-cluster-map plugin, this query will show you a map of your most recent 1000 photos:

select
    *, 
    json_extract(
        bplist_to_json(value),
        ""$.{GPS}.Latitude""
    ) as latitude,
    -json_extract(
        bplist_to_json(value),
        ""$.{GPS}.Longitude""
    ) as longitude,
    json_extract(
        bplist_to_json(value),
        ""$.{Exif}.DateTimeOriginal""
    ) as datetime
from
    RKMaster_dataNote
where
    latitude is not null
order by
    attachedToId desc
",,,,,, 191022928,MDEwOlJlcG9zaXRvcnkxOTEwMjI5Mjg=,datasette-render-binary,simonw/datasette-render-binary,0,9599,https://github.com/simonw/datasette-render-binary,Datasette plugin for rendering binary data,0,2019-06-09T15:25:52Z,2021-06-02T09:29:20Z,2019-06-13T16:14:31Z,,62,7,7,Python,1,1,1,1,0,0,0,0,1,apache-2.0,"[""datasette"", ""datasette-plugin"", ""datasette-io""]",0,1,7,master,"{""admin"": false, ""push"": false, ""pull"": false}",,,0,1,"# datasette-render-binary [![PyPI](https://img.shields.io/pypi/v/datasette-render-binary.svg)](https://pypi.org/project/datasette-render-binary/) [![CircleCI](https://circleci.com/gh/simonw/datasette-render-binary.svg?style=svg)](https://circleci.com/gh/simonw/datasette-render-binary) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/datasette-render-binary/blob/master/LICENSE) Datasette plugin for rendering binary data. Install this plugin in the same environment as Datasette to enable this new functionality: pip install datasette-render-binary Binary data in cells will now be rendered as a mixture of characters and octets. ![Screenshot](https://raw.githubusercontent.com/simonw/datasette-render-binary/master/example.png) ","

datasette-render-binary

Datasette plugin for rendering binary data.

Install this plugin in the same environment as Datasette to enable this new functionality:

pip install datasette-render-binary

Binary data in cells will now be rendered as a mixture of characters and octets.
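To try it out you need a table containing BLOB values. Here's a minimal sketch using Python's standard library sqlite3 module, with invented database, table and column names:

import sqlite3

conn = sqlite3.connect('binary-demo.db')
conn.execute('create table if not exists blobs (id integer primary key, data blob)')
# A mix of printable ASCII (a GIF header) and non-printable bytes
conn.execute('insert into blobs (data) values (?)', (b'GIF89a\x01\x00\x01\x00\x80\x00\x00',))
conn.commit()

Run datasette binary-demo.db to see how the bytes are displayed.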

",,,,,, 195696804,MDEwOlJlcG9zaXRvcnkxOTU2OTY4MDQ=,datasette-cors,simonw/datasette-cors,0,9599,https://github.com/simonw/datasette-cors,Datasette plugin for configuring CORS headers,0,2019-07-07T21:03:11Z,2021-02-27T00:31:13Z,2019-07-11T04:40:57Z,,11,9,9,Python,1,1,1,1,0,0,0,0,1,apache-2.0,"[""datasette"", ""datasette-plugin"", ""datasette-io""]",0,1,9,master,"{""admin"": false, ""push"": false, ""pull"": false}",,,0,3,"# datasette-cors [![PyPI](https://img.shields.io/pypi/v/datasette-cors.svg)](https://pypi.org/project/datasette-cors/) [![CircleCI](https://circleci.com/gh/simonw/datasette-cors.svg?style=svg)](https://circleci.com/gh/simonw/datasette-cors) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/datasette-cors/blob/master/LICENSE) Datasette plugin for configuring CORS headers, based on https://github.com/simonw/asgi-cors You can use this plugin to allow JavaScript running on a whitelisted set of domains to make `fetch()` calls to the JSON API provided by your Datasette instance. ## Installation pip install datasette-cors ## Configuration You need to add some configuration to your Datasette `metadata.json` file for this plugin to take effect. To whitelist specific domains, use this: ```json { ""plugins"": { ""datasette-cors"": { ""hosts"": [""https://www.example.com""] } } } ``` You can also whitelist patterns like this: ```json { ""plugins"": { ""datasette-cors"": { ""host_wildcards"": [""https://*.example.com""] } } } ``` ## Testing it To test this plugin out, run it locally by saving one of the above examples as `metadata.json` and running this: $ datasette --memory -m metadata.json Now visit https://www.example.com/ in your browser, open the browser developer console and paste in the following: ```javascript fetch(""http://127.0.0.1:8001/:memory:.json?sql=select+sqlite_version%28%29"").then(r => r.json()).then(console.log) ``` If the plugin is running correctly, you will see the JSON response output to the console. ","

datasette-cors

Datasette plugin for configuring CORS headers, based on https://github.com/simonw/asgi-cors

You can use this plugin to allow JavaScript running on a whitelisted set of domains to make fetch() calls to the JSON API provided by your Datasette instance.

Installation

pip install datasette-cors

Configuration

You need to add some configuration to your Datasette metadata.json file for this plugin to take effect.

To whitelist specific domains, use this:

{
    ""plugins"": {
        ""datasette-cors"": {
            ""hosts"": [""https://www.example.com""]
        }
    }
}

You can also whitelist patterns like this:

{
    ""plugins"": {
        ""datasette-cors"": {
            ""host_wildcards"": [""https://*.example.com""]
        }
    }
}

Testing it

To test this plugin out, run it locally by saving one of the above examples as metadata.json and running this:

$ datasette --memory -m metadata.json

Now visit https://www.example.com/ in your browser, open the browser developer console and paste in the following:

fetch(""http://127.0.0.1:8001/:memory:.json?sql=select+sqlite_version%28%29"").then(r => r.json()).then(console.log)

If the plugin is running correctly, you will see the JSON response output to the console.
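You can also check the headers from Python - a minimal sketch using the requests library, assuming the whitelist configuration shown above:

import requests

response = requests.get(
    'http://127.0.0.1:8001/:memory:.json?sql=select+1',
    headers={'Origin': 'https://www.example.com'},
)
# Should echo the whitelisted origin back if the plugin matched it
print(response.headers.get('access-control-allow-origin'))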

",,,,,, 205429375,MDEwOlJlcG9zaXRvcnkyMDU0MjkzNzU=,swarm-to-sqlite,dogsheep/swarm-to-sqlite,0,53015001,https://github.com/dogsheep/swarm-to-sqlite,Create a SQLite database containing your checkin history from Foursquare Swarm,0,2019-08-30T17:37:29Z,2021-02-22T07:58:39Z,2021-01-18T04:36:03Z,,49,37,37,Python,1,1,1,1,0,1,0,0,1,apache-2.0,"[""sqlite"", ""foursquare"", ""swarm"", ""foursquare-api"", ""datasette"", ""dogsheep"", ""datasette-io"", ""datasette-tool""]",1,1,37,main,"{""admin"": false, ""push"": false, ""pull"": false}",,53015001,1,3,"# swarm-to-sqlite [![PyPI](https://img.shields.io/pypi/v/swarm-to-sqlite.svg)](https://pypi.org/project/swarm-to-sqlite/) [![Changelog](https://img.shields.io/github/v/release/dogsheep/swarm-to-sqlite?include_prereleases&label=changelog)](https://github.com/dogsheep/swarm-to-sqlite/releases) [![Tests](https://github.com/dogsheep/swarm-to-sqlite/workflows/Test/badge.svg)](https://github.com/dogsheep/swarm-to-sqlite/actions?query=workflow%3ATest) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/dogsheep/swarm-to-sqlite/blob/main/LICENSE) Create a SQLite database containing your checkin history from Foursquare Swarm. ## How to install $ pip install swarm-to-sqlite ## Usage You will need to first obtain a valid OAuth token for your Foursquare account. You can do so using this tool: https://your-foursquare-oauth-token.glitch.me/ Simplest usage is to simply provide the name of the database file you wish to write to. The tool will prompt you to paste in your token, and will then download your checkins and store them in the specified database file. $ swarm-to-sqlite checkins.db Please provide your Foursquare OAuth token: Importing 3699 checkins [#########-----------------------] 27% 00:02:31 You can also pass the token as a command-line option: $ swarm-to-sqlite checkins.db --token=XXX Or as an environment variable: $ export FOURSQUARE_TOKEN=XXX $ swarm-to-sqlite checkins.db To retrieve just checkins within the past X hours, days or weeks, use the `--since=` option. For example, to pull only checkins that happened within the last 10 days use: $ swarm-to-sqlite checkins.db --token=XXX --since=10d Use `2w` for two weeks, `10h` for ten hours, `3d` for three days. In addition to saving the checkins to a database, you can also write them to a JSON file using the `--save` option: $ swarm-to-sqlite checkins.db --save=checkins.json Having done this, you can re-import checkins directly from that file (rather than making API calls to fetch data from Foursquare) like this: $ swarm-to-sqlite checkins.db --load=checkins.json ## Using with Datasette The SQLite database produced by this tool is designed to be browsed using [Datasette](https://datasette.io/). You can install the [datasette-cluster-map](https://datasette.io/plugins/datasette-cluster-map) plugin to view your checkins on a map. ","

swarm-to-sqlite

Create a SQLite database containing your checkin history from Foursquare Swarm.

How to install

$ pip install swarm-to-sqlite

Usage

You will first need to obtain a valid OAuth token for your Foursquare account. You can do so using this tool: https://your-foursquare-oauth-token.glitch.me/

The simplest usage is to provide the name of the database file you wish to write to. The tool will prompt you to paste in your token, and will then download your checkins and store them in the specified database file.

$ swarm-to-sqlite checkins.db
Please provide your Foursquare OAuth token:
Importing 3699 checkins  [#########-----------------------] 27% 00:02:31

You can also pass the token as a command-line option:

$ swarm-to-sqlite checkins.db --token=XXX

Or as an environment variable:

$ export FOURSQUARE_TOKEN=XXX
$ swarm-to-sqlite checkins.db

To retrieve just checkins within the past X hours, days or weeks, use the --since= option. For example, to pull only checkins that happened within the last 10 days use:

$ swarm-to-sqlite checkins.db --token=XXX --since=10d

Use 2w for two weeks, 10h for ten hours, 3d for three days.

In addition to saving the checkins to a database, you can also write them to a JSON file using the --save option:

$ swarm-to-sqlite checkins.db --save=checkins.json

Having done this, you can re-import checkins directly from that file (rather than making API calls to fetch data from Foursquare) like this:

$ swarm-to-sqlite checkins.db --load=checkins.json

Using with Datasette

The SQLite database produced by this tool is designed to be browsed using Datasette.

You can install the datasette-cluster-map plugin to view your checkins on a map.
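You can also explore the data programmatically. Here's a minimal sketch using Python's standard library sqlite3 module - the checkins table is created by this tool, but check your own database for the exact schema:

import sqlite3

conn = sqlite3.connect('checkins.db')
count = conn.execute('select count(*) from checkins').fetchone()[0]
print(f'{count} checkins imported')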

",,,,,, 220716822,MDEwOlJlcG9zaXRvcnkyMjA3MTY4MjI=,datasette-render-markdown,simonw/datasette-render-markdown,0,9599,https://github.com/simonw/datasette-render-markdown,Datasette plugin for rendering Markdown,0,2019-11-09T23:28:31Z,2022-05-26T04:58:56Z,2022-07-18T19:35:10Z,,57,11,11,Python,1,1,1,1,0,0,0,0,1,apache-2.0,"[""datasette"", ""datasette-io"", ""datasette-plugin"", ""markdown""]",0,1,11,main,"{""admin"": false, ""maintain"": false, ""push"": false, ""triage"": false, ""pull"": false}",,,0,2,"# datasette-render-markdown [![PyPI](https://img.shields.io/pypi/v/datasette-render-markdown.svg)](https://pypi.org/project/datasette-render-markdown/) [![Changelog](https://img.shields.io/github/v/release/simonw/datasette-render-markdown?include_prereleases&label=changelog)](https://github.com/simonw/datasette-render-markdown/releases) [![Tests](https://github.com/simonw/datasette-render-markdown/workflows/Test/badge.svg)](https://github.com/simonw/datasette-render-markdown/actions?query=workflow%3ATest) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/datasette-render-markdown/blob/main/LICENSE) Datasette plugin for rendering Markdown. ## Installation Install this plugin in the same environment as Datasette to enable this new functionality: $ pip install datasette-render-markdown ## Usage You can explicitly list the columns you would like to treat as Markdown using [plugin configuration](https://datasette.readthedocs.io/en/stable/plugins.html#plugin-configuration) in a `metadata.json` file. Add a `""datasette-render-markdown""` configuration block and use a `""columns""` key to list the columns you would like to treat as Markdown values: ```json { ""plugins"": { ""datasette-render-markdown"": { ""columns"": [""body""] } } } ``` This will cause any `body` column in any table to be treated as markdown and safely rendered using [Python-Markdown](https://python-markdown.github.io/). The resulting HTML is then run through [Bleach](https://bleach.readthedocs.io/) to avoid the risk of XSS security problems. Save this to `metadata.json` and run Datasette with the `--metadata` flag to load this configuration: $ datasette serve mydata.db --metadata metadata.json The configuration block can be used at the top level, or it can be applied just to specific databases or tables. Here's how to apply it to just the `entries` table in the `news.db` database: ```json { ""databases"": { ""news"": { ""tables"": { ""entries"": { ""plugins"": { ""datasette-render-markdown"": { ""columns"": [""body""] } } } } } } } ``` And here's how to apply it to every `body` column in every table in the `news.db` database: ```json { ""databases"": { ""news"": { ""plugins"": { ""datasette-render-markdown"": { ""columns"": [""body""] } } } } } ``` ## Columns that match a naming convention This plugin can also render markdown in any columns that match a specific naming convention. By default, columns that have a name ending in `_markdown` will be rendered. You can try this out using the following query: ```sql select '# Hello there * This is a list * of items [And a link](https://github.com/simonw/datasette-render-markdown).' as demo_markdown ``` You can configure a different list of wildcard patterns using the `""patterns""` configuration key. 
Here's how to render columns that end in either `_markdown` or `_md`: ```json { ""plugins"": { ""datasette-render-markdown"": { ""patterns"": [""*_markdown"", ""*_md""] } } } ``` To disable wildcard column matching entirely, set `""patterns"": []` in your plugin metadata configuration. ## Markdown extensions The [Python-Markdown library](https://python-markdown.github.io/) that powers this plugin supports extensions, both [bundled](https://python-markdown.github.io/extensions/) and [third-party](https://github.com/Python-Markdown/markdown/wiki/Third-Party-Extensions). These can be used to enable additional Markdown features such as [table support](https://python-markdown.github.io/extensions/tables/). You can configure support for extensions using the `""extensions""` key in your plugin metadata configuration. Since extensions may introduce new HTML tags, you will also need to add those tags to the list of tags that are allowed by the [Bleach](https://bleach.readthedocs.io/) sanitizer. You can do that using the `""extra_tags""` key, and you can whitelist additional HTML attributes using `""extra_attrs""`. See [the Bleach documentation](https://bleach.readthedocs.io/en/latest/clean.html#allowed-tags-tags) for more information on this. Here's how to enable support for [Markdown tables](https://python-markdown.github.io/extensions/tables/): ```json { ""plugins"": { ""datasette-render-markdown"": { ""extensions"": [""tables""], ""extra_tags"": [""table"", ""thead"", ""tr"", ""th"", ""td"", ""tbody""] } } } ``` ### GitHub-Flavored Markdown Enabling [GitHub-Flavored Markdown](https://help.github.com/en/github/writing-on-github) (useful for if you are working with data imported from GitHub using [github-to-sqlite](https://github.com/dogsheep/github-to-sqlite)) is a little more complicated. First, you will need to install the [py-gfm](https://py-gfm.readthedocs.io) package: $ pip install py-gfm Note that `py-gfm` has [a bug](https://github.com/Zopieux/py-gfm/issues/13) that causes it to pin to `Markdown<3.0` - so if you are using it you should install it _before_ installing `datasette-render-markdown` to ensure you get a compatibly version of that dependency. Now you can configure it like this. Note that the extension name is `mdx_gfm:GithubFlavoredMarkdownExtension` and you need to whitelist several extra HTML tags and attributes: ```json { ""plugins"": { ""datasette-render-markdown"": { ""extra_tags"": [ ""hr"", ""br"", ""details"", ""summary"", ""input"" ], ""extra_attrs"": { ""input"": [ ""type"", ""disabled"", ""checked"" ], }, ""extensions"": [ ""mdx_gfm:GithubFlavoredMarkdownExtension"" ] } } } ``` The `` attributes are needed to support rendering checkboxes in issue descriptions. ## Markdown in templates The plugin also adds a new template function: `render_markdown(value)`. You can use this in your templates like so: ```html+jinja {{ render_markdown("""""" # This is markdown * One * Two * Three """""") }} ``` You can load additional extensions and whitelist tags by passing extra arguments to the function like this: ```html+jinja {{ render_markdown("""""" ## Markdown table First Header | Second Header ------------- | ------------- Content Cell | Content Cell Content Cell | Content Cell """""", extensions=[""tables""], extra_tags=[""table"", ""thead"", ""tr"", ""th"", ""td"", ""tbody""])) }} ``` ","

datasette-render-markdown

Datasette plugin for rendering Markdown.

Installation

Install this plugin in the same environment as Datasette to enable this new functionality:

$ pip install datasette-render-markdown

Usage

You can explicitly list the columns you would like to treat as Markdown using plugin configuration in a metadata.json file.

Add a ""datasette-render-markdown"" configuration block and use a ""columns"" key to list the columns you would like to treat as Markdown values:

{
    ""plugins"": {
        ""datasette-render-markdown"": {
            ""columns"": [""body""]
        }
    }
}

This will cause any body column in any table to be treated as markdown and safely rendered using Python-Markdown. The resulting HTML is then run through Bleach to avoid the risk of XSS security problems.
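Under the hood this is roughly the following pipeline - a sketch using the markdown and bleach packages, where the tag whitelist shown is an illustrative guess rather than the plugin's actual list:

import bleach
import markdown

def render_safely(text):
    # Convert Markdown to HTML first
    html = markdown.markdown(text)
    # Then remove any tags that are not on the whitelist
    return bleach.clean(
        html,
        tags=['p', 'a', 'em', 'strong', 'ul', 'ol', 'li', 'code', 'pre'],
        strip=True,
    )

print(render_safely('# Hello *there*'))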

Save this to metadata.json and run Datasette with the --metadata flag to load this configuration:

$ datasette serve mydata.db --metadata metadata.json

The configuration block can be used at the top level, or it can be applied just to specific databases or tables. Here's how to apply it to just the entries table in the news.db database:

{
    ""databases"": {
        ""news"": {
            ""tables"": {
                ""entries"": {
                    ""plugins"": {
                        ""datasette-render-markdown"": {
                            ""columns"": [""body""]
                        }
                    }
                }
            }
        }
    }
}

And here's how to apply it to every body column in every table in the news.db database:

{
    ""databases"": {
        ""news"": {
            ""plugins"": {
                ""datasette-render-markdown"": {
                    ""columns"": [""body""]
                }
            }
        }
    }
}

Columns that match a naming convention

This plugin can also render markdown in any columns that match a specific naming convention.

By default, columns that have a name ending in _markdown will be rendered.

You can try this out using the following query:

select '# Hello there

* This is a list
* of items

[And a link](https://github.com/simonw/datasette-render-markdown).'
as demo_markdown

You can configure a different list of wildcard patterns using the ""patterns"" configuration key. Here's how to render columns that end in either _markdown or _md:

{
    ""plugins"": {
        ""datasette-render-markdown"": {
            ""patterns"": [""*_markdown"", ""*_md""]
        }
    }
}

To disable wildcard column matching entirely, set ""patterns"": [] in your plugin metadata configuration.

Markdown extensions

The Python-Markdown library that powers this plugin supports extensions, both bundled and third-party. These can be used to enable additional Markdown features such as table support.

You can configure support for extensions using the ""extensions"" key in your plugin metadata configuration.

Since extensions may introduce new HTML tags, you will also need to add those tags to the list of tags that are allowed by the Bleach sanitizer. You can do that using the ""extra_tags"" key, and you can whitelist additional HTML attributes using ""extra_attrs"". See the Bleach documentation for more information on this.

Here's how to enable support for Markdown tables:

{
    ""plugins"": {
        ""datasette-render-markdown"": {
            ""extensions"": [""tables""],
            ""extra_tags"": [""table"", ""thead"", ""tr"", ""th"", ""td"", ""tbody""]
        }
    }
}

GitHub-Flavored Markdown

Enabling GitHub-Flavored Markdown (useful if you are working with data imported from GitHub using github-to-sqlite) is a little more complicated.

First, you will need to install the py-gfm package:

$ pip install py-gfm

Note that py-gfm has a bug that causes it to pin to Markdown<3.0 - so if you are using it you should install it before installing datasette-render-markdown to ensure you get a compatible version of that dependency.

Now you can configure it like this. Note that the extension name is mdx_gfm:GithubFlavoredMarkdownExtension and you need to whitelist several extra HTML tags and attributes:

{
    ""plugins"": {
        ""datasette-render-markdown"": {
            ""extra_tags"": [
                ""hr"",
                ""br"",
                ""details"",
                ""summary"",
                ""input""
            ],
            ""extra_attrs"": {
                ""input"": [
                    ""type"",
                    ""disabled"",
                    ""checked""
                ]
            },
            ""extensions"": [
                ""mdx_gfm:GithubFlavoredMarkdownExtension""
            ]
        }
    }
}

The extra ""input"" attributes (""type"", ""disabled"" and ""checked"") are needed to support rendering checkboxes in issue descriptions.

Markdown in templates

The plugin also adds a new template function: render_markdown(value). You can use this in your templates like so:

{{ render_markdown(""""""
# This is markdown

* One
* Two
* Three
"""""") }}

You can load additional extensions and whitelist tags by passing extra arguments to the function like this:

{{ render_markdown(""""""
## Markdown table

First Header  | Second Header
------------- | -------------
Content Cell  | Content Cell
Content Cell  | Content Cell
"""""", extensions=[""tables""],
    extra_tags=[""table"", ""thead"", ""tr"", ""th"", ""td"", ""tbody""])) }}
",1,public,0,,0, 221802296,MDEwOlJlcG9zaXRvcnkyMjE4MDIyOTY=,datasette-template-sql,simonw/datasette-template-sql,0,9599,https://github.com/simonw/datasette-template-sql,Datasette plugin for executing SQL queries from templates,0,2019-11-14T23:05:34Z,2021-05-18T17:58:47Z,2021-05-18T17:58:44Z,https://datasette.io/plugins/datasette-template-sql,23,6,6,Python,1,1,1,1,0,0,0,0,1,apache-2.0,"[""datasette"", ""datasette-plugin"", ""datasette-io""]",0,1,6,main,"{""admin"": false, ""push"": false, ""pull"": false}",,,0,1,"# datasette-template-sql [![PyPI](https://img.shields.io/pypi/v/datasette-template-sql.svg)](https://pypi.org/project/datasette-template-sql/) [![Changelog](https://img.shields.io/github/v/release/simonw/datasette-template-sql?include_prereleases&label=changelog)](https://github.com/simonw/datasette-template-sql/releases) [![Tests](https://github.com/simonw/datasette-template-sql/workflows/Test/badge.svg)](https://github.com/simonw/datasette-template-sql/actions?query=workflow%3ATest) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/datasette-template-sql/blob/main/LICENSE) Datasette plugin for executing SQL queries from templates. ## Examples [datasette.io](https://datasette.io/) uses this plugin extensively with [custom page templates](https://docs.datasette.io/en/stable/custom_templates.html#custom-pages), check out [simonw/datasette.io](https://github.com/simonw/datasette.io) to see how it works. [www.niche-museums.com](https://www.niche-museums.com/) uses this plugin to run a custom themed website on top of Datasette. The full source code for the site [is here](https://github.com/simonw/museums) - see also [niche-museums.com, powered by Datasette](https://simonwillison.net/2019/Nov/25/niche-museums/). [simonw/til](https://github.com/simonw/til) is another simple example, described in [Using a self-rewriting README powered by GitHub Actions to track TILs](https://simonwillison.net/2020/Apr/20/self-rewriting-readme/). ## Installation Run this command to install the plugin in the same environment as Datasette: $ pip install datasette-template-sql ## Usage This plugin makes a new function, `sql(sql_query)`, available to your Datasette templates. You can use it like this: ```html+jinja {% for row in sql(""select 1 + 1 as two, 2 * 4 as eight"") %} {% for key in row.keys() %} {{ key }}: {{ row[key] }}
{% endfor %} {% endfor %} ``` The plugin will execute SQL against the current database for the page in `database.html`, `table.html` and `row.html` templates. If a template does not have a current database (`index.html` for example) the query will execute against the first attached database. ### Queries with arguments You can construct a SQL query using `?` or `:name` parameter syntax by passing a list or dictionary as a second argument: ```html+jinja {% for row in sql(""select distinct topic from til order by topic"") %}

{{ row.topic }}

{% endfor %} ``` Here's the same example using the `:topic` style of parameters: ```html+jinja {% for row in sql(""select distinct topic from til order by topic"") %}

{{ row.topic }}

{% endfor %} ``` ### Querying a different database You can pass an optional `database=` argument to specify a named database to use for the query. For example, if you have attached a `news.db` database you could use this: ```html+jinja {% for article in sql( ""select headline, date, summary from articles order by date desc limit 5"", database=""news"" ) %}

{{ article.headline }}

{{ article.date }}

{{ article.summary }}

{% endfor %} ``` ","

datasette-template-sql

Datasette plugin for executing SQL queries from templates.

Examples

datasette.io uses this plugin extensively with custom page templates, check out simonw/datasette.io to see how it works.

www.niche-museums.com uses this plugin to run a custom themed website on top of Datasette. The full source code for the site is here - see also niche-museums.com, powered by Datasette.

simonw/til is another simple example, described in Using a self-rewriting README powered by GitHub Actions to track TILs.

Installation

Run this command to install the plugin in the same environment as Datasette:

$ pip install datasette-template-sql

Usage

This plugin makes a new function, sql(sql_query), available to your Datasette templates.

You can use it like this:

{% for row in sql(""select 1 + 1 as two, 2 * 4 as eight"") %}
    {% for key in row.keys() %}
        {{ key }}: {{ row[key] }}<br>
    {% endfor %}
{% endfor %}

The plugin will execute SQL against the current database for the page in database.html, table.html and row.html templates. If a template does not have a current database (index.html for example) the query will execute against the first attached database.

Queries with arguments

You can construct a SQL query using ? or :name parameter syntax by passing a list or dictionary as a second argument:

{% for row in sql(""select distinct topic from til order by topic"") %}
    <h2>{{ row.topic }}</h2>
    <ul>
        {% for til in sql(""select * from til where topic = ?"", [row.topic]) %}
            <li><a href=""{{ til.url }}"">{{ til.title }}</a> - {{ til.created[:10] }}</li>
        {% endfor %}
    </ul>
{% endfor %}

Here's the same example using the :topic style of parameters:

{% for row in sql(""select distinct topic from til order by topic"") %}
    <h2>{{ row.topic }}</h2>
    <ul>
        {% for til in sql(""select * from til where topic = :topic"", {""topic"": row.topic}) %}
            <li><a href=""{{ til.url }}"">{{ til.title }}</a> - {{ til.created[:10] }}</li>
        {% endfor %}
    </ul>
{% endfor %}

Querying a different database

You can pass an optional database= argument to specify a named database to use for the query. For example, if you have attached a news.db database you could use this:

{% for article in sql(
    ""select headline, date, summary from articles order by date desc limit 5"",
    database=""news""
) %}
    <h3>{{ article.headline }}</h3>
    <p class=""date"">{{ article.date }}</p>
    <p>{{ article.summary }}</p>
{% endfor %}
",,,,,, 245670670,MDEwOlJlcG9zaXRvcnkyNDU2NzA2NzA=,fec-to-sqlite,simonw/fec-to-sqlite,0,9599,https://github.com/simonw/fec-to-sqlite,Save FEC campaign finance data to a SQLite database,0,2020-03-07T16:52:49Z,2020-12-19T05:09:05Z,2020-03-07T18:21:48Z,,16,8,8,Python,1,1,1,1,0,0,0,0,1,apache-2.0,"[""sqlite"", ""fec"", ""datasette"", ""datasette-io"", ""datasette-tool""]",0,1,8,master,"{""admin"": false, ""push"": false, ""pull"": false}",,,0,2,"# fec-to-sqlite [![PyPI](https://img.shields.io/pypi/v/fec-to-sqlite.svg)](https://pypi.org/project/fec-to-sqlite/) [![CircleCI](https://circleci.com/gh/simonw/fec-to-sqlite.svg?style=svg)](https://circleci.com/gh/simonw/fec-to-sqlite) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/fec-to-sqlite/blob/master/LICENSE) Create a SQLite database using FEC campaign contributions data. This tool builds on [fecfile](https://github.com/esonderegger/) by Evan Sonderegger. ## How to install $ pip install fec-to-sqlite ## Usage $ fec-to-sqlite filings filings.db 1146148 This fetches the filing with ID `1146148` and stores it in tables in a SQLite database called `filings.db`. It will create any tables it needs. You can pass more than one filing ID, separated by spaces. ","

fec-to-sqlite

Create a SQLite database using FEC campaign contributions data.

This tool builds on fecfile by Evan Sonderegger.

How to install

$ pip install fec-to-sqlite

Usage

$ fec-to-sqlite filings filings.db 1146148

This fetches the filing with ID 1146148 and stores it in tables in a SQLite database called filings.db. It will create any tables it needs.

You can pass more than one filing ID, separated by spaces.
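For example, to fetch two filings in one go (the second filing ID here is hypothetical, for illustration only):

$ fec-to-sqlite filings filings.db 1146148 1146149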

",,,,,, 248999994,MDEwOlJlcG9zaXRvcnkyNDg5OTk5OTQ=,datasette-show-errors,simonw/datasette-show-errors,0,9599,https://github.com/simonw/datasette-show-errors,Datasette plugin for displaying error tracebacks,0,2020-03-21T15:06:04Z,2020-09-24T00:17:29Z,2020-09-01T00:32:23Z,,7,1,1,Python,1,1,1,1,0,0,0,0,1,apache-2.0,"[""asgi"", ""datasette"", ""starlette"", ""datasette-plugin"", ""datasette-io""]",0,1,1,master,"{""admin"": false, ""push"": false, ""pull"": false}",,,0,0,"# datasette-show-errors [![PyPI](https://img.shields.io/pypi/v/datasette-show-errors.svg)](https://pypi.org/project/datasette-show-errors/) [![CircleCI](https://circleci.com/gh/simonw/datasette-show-errors.svg?style=svg)](https://circleci.com/gh/simonw/datasette-show-errors) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/datasette-show-errors/blob/master/LICENSE) Datasette plugin for displaying error tracebacks. **This plugin does not work with current versions of Datasette.** See [issue #2](https://github.com/simonw/datasette-show-errors/issues/2). ## Installation pip install datasette-show-errors ## Usage Installing the plugin will cause any internal error to be displayed with a full traceback, rather than just a generic 500 page. Be careful not to use this in a context that might expose sensitive information. ","

datasette-show-errors

Datasette plugin for displaying error tracebacks.

This plugin does not work with current versions of Datasette. See issue #2.

Installation

pip install datasette-show-errors

Usage

Installing the plugin will cause any internal error to be displayed with a full traceback, rather than just a generic 500 page.

Be careful not to use this in a context that might expose sensitive information.

",,,,,, 272098486,MDEwOlJlcG9zaXRvcnkyNzIwOTg0ODY=,datasette-psutil,simonw/datasette-psutil,0,9599,https://github.com/simonw/datasette-psutil,Datasette plugin adding a /-/psutil debugging endpoint,0,2020-06-13T22:57:07Z,2022-03-07T15:36:30Z,2022-03-07T15:35:57Z,https://datasette.io/plugins/datasette-psutil,12,2,2,Python,1,1,1,1,0,0,0,0,1,apache-2.0,"[""datasette"", ""datasette-io"", ""datasette-plugin"", ""psutil""]",0,1,2,main,"{""admin"": false, ""maintain"": false, ""push"": false, ""triage"": false, ""pull"": false}",,,0,2,"# datasette-psutil [![PyPI](https://img.shields.io/pypi/v/datasette-psutil.svg)](https://pypi.org/project/datasette-psutil/) [![Changelog](https://img.shields.io/github/v/release/simonw/datasette-psutil?include_prereleases&label=changelog)](https://github.com/simonw/datasette-psutil/releases) [![Tests](https://github.com/simonw/datasette-psutil/workflows/Test/badge.svg)](https://github.com/simonw/datasette-psutil/actions?query=workflow%3ATest) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/datasette-psutil/blob/main/LICENSE) Datasette plugin adding a `/-/psutil` debugging endpoint ## Installation Install this plugin in the same environment as Datasette. $ pip install datasette-psutil ## Usage Visit `/-/psutil` on your Datasette instance to see various information provided by [psutil](https://psutil.readthedocs.io/). ## Demo https://latest-with-plugins.datasette.io/-/psutil is a live demo of this plugin, hosted on Google Cloud Run. ","

datasette-psutil

Datasette plugin adding a /-/psutil debugging endpoint

Installation

Install this plugin in the same environment as Datasette.

$ pip install datasette-psutil

Usage

Visit /-/psutil on your Datasette instance to see various information provided by psutil.
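You can also fetch the page programmatically - a sketch using the Python requests library, assuming Datasette is running on the default port:

import requests

print(requests.get('http://127.0.0.1:8001/-/psutil').text)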

Demo

https://latest-with-plugins.datasette.io/-/psutil is a live demo of this plugin, hosted on Google Cloud Run.

",1,public,0,,, 273609879,MDEwOlJlcG9zaXRvcnkyNzM2MDk4Nzk=,datasette-saved-queries,simonw/datasette-saved-queries,0,9599,https://github.com/simonw/datasette-saved-queries,Datasette plugin that lets users save and execute queries,0,2020-06-20T00:20:42Z,2020-09-24T05:08:37Z,2020-08-15T23:38:46Z,,12,2,2,Python,1,1,1,1,0,0,0,0,1,,"[""datasette"", ""datasette-plugin"", ""datasette-io""]",0,1,2,main,"{""admin"": false, ""push"": false, ""pull"": false}",,,0,1,"# datasette-saved-queries [![PyPI](https://img.shields.io/pypi/v/datasette-saved-queries.svg)](https://pypi.org/project/datasette-saved-queries/) [![Changelog](https://img.shields.io/github/v/release/simonw/datasette-saved-queries?label=changelog)](https://github.com/simonw/datasette-saved-queries/releases) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/datasette-saved-queries/blob/master/LICENSE) Datasette plugin that lets users save and execute queries ## Installation Install this plugin in the same environment as Datasette. $ pip install datasette-saved-queries ## Usage When the plugin is installed Datasette will automatically create a `saved_queries` table in the first connected database when it starts up. It also creates a `save_query` writable canned query which you can use to save new queries. Queries that you save will be added to the query list on the database page. ## Development To set up this plugin locally, first checkout the code. Then create a new virtual environment: cd datasette-saved-queries python -mvenv venv source venv/bin/activate Or if you are using `pipenv`: pipenv shell Now install the dependencies and tests: pip install -e '.[test]' To run the tests: pytest ","

datasette-saved-queries

Datasette plugin that lets users save and execute queries

Installation

Install this plugin in the same environment as Datasette.

$ pip install datasette-saved-queries

Usage

When the plugin is installed, Datasette will automatically create a saved_queries table in the first connected database when it starts up.

It also creates a save_query writable canned query which you can use to save new queries.

Queries that you save will be added to the query list on the database page.
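Because saved queries live in an ordinary table, you can inspect them directly. Here's a minimal sketch using Python's standard library sqlite3 module - the saved_queries table name comes from the description above; the database filename is hypothetical:

import sqlite3

conn = sqlite3.connect('mydata.db')
for row in conn.execute('select * from saved_queries'):
    print(row)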

Development

To set up this plugin locally, first check out the code. Then create a new virtual environment:

cd datasette-saved-queries
python -m venv venv
source venv/bin/activate

Or if you are using pipenv:

pipenv shell

Now install the dependencies and test dependencies:

pip install -e '.[test]'

To run the tests:

pytest
",,,,,, 280500027,MDEwOlJlcG9zaXRvcnkyODA1MDAwMjc=,datasette-insert,simonw/datasette-insert,0,9599,https://github.com/simonw/datasette-insert,Datasette plugin for inserting and updating data,0,2020-07-17T18:40:34Z,2022-06-27T02:54:14Z,2022-07-22T17:52:23Z,,54,9,9,Python,1,1,1,1,0,0,0,0,1,,"[""datasette"", ""datasette-io"", ""datasette-plugin""]",0,1,9,main,"{""admin"": false, ""maintain"": false, ""push"": false, ""triage"": false, ""pull"": false}",,,0,2,"# datasette-insert [![PyPI](https://img.shields.io/pypi/v/datasette-insert.svg)](https://pypi.org/project/datasette-insert/) [![Changelog](https://img.shields.io/github/v/release/simonw/datasette-insert?include_prereleases&label=changelog)](https://github.com/simonw/datasette-insert/releases) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/datasette-insert/blob/master/LICENSE) Datasette plugin for inserting and updating data ## Installation Install this plugin in the same environment as Datasette. $ pip install datasette-insert This plugin should always be deployed with additional configuration to prevent unauthenticated access, see notes below. If you are trying it out on your own local machine, you can `pip install` the [datasette-insert-unsafe](https://github.com/simonw/datasette-insert-unsafe) plugin to allow access without needing to set up authentication or permissions separately. ## Inserting data and creating tables Start datasette and make sure it has a writable SQLite database attached to it. If you have not yet created a database file you can use this: datasette data.db --create The `--create` option will create a new empty `data.db` database file if it does not already exist. The plugin adds an endpoint that allows data to be inserted or updated and tables to be created by POSTing JSON data to the following URL: /-/insert/name-of-database/name-of-table The JSON should look like this: ```json [ { ""id"": 1, ""name"": ""Cleopaws"", ""age"": 5 }, { ""id"": 2, ""name"": ""Pancakes"", ""age"": 5 } ] ``` The first time data is posted to the URL a table of that name will be created if it does not aready exist, with the desired columns. You can specify which column should be used as the primary key using the `?pk=` URL argument. 
Here's how to POST to a database and create a new table using the Python `requests` library: ```python import requests requests.post(""http://localhost:8001/-/insert/data/dogs?pk=id"", json=[ { ""id"": 1, ""name"": ""Cleopaws"", ""age"": 5 }, { ""id"": 2, ""name"": ""Pancakes"", ""age"": 4 } ]) ``` And here's how to do the same thing using `curl`: ``` curl --request POST \ --data '[ { ""id"": 1, ""name"": ""Cleopaws"", ""age"": 5 }, { ""id"": 2, ""name"": ""Pancakes"", ""age"": 4 } ]' \ 'http://localhost:8001/-/insert/data/dogs?pk=id' ``` Or by piping in JSON like so: cat dogs.json | curl --request POST -d @- \ 'http://localhost:8001/-/insert/data/dogs?pk=id' ### Inserting a single row If you are inserting a single row you can optionally send it as a dictionary rather than a list with a single item: ``` curl --request POST \ --data '{ ""id"": 1, ""name"": ""Cleopaws"", ""age"": 5 }' \ 'http://localhost:8001/-/insert/data/dogs?pk=id' ``` ### Automatically adding new columns If you send data to an existing table with keys that are not reflected by the existing columns, you will get an HTTP 400 error with a JSON response like this: ```json { ""status"": 400, ""error"": ""Unknown keys: 'foo'"", ""error_code"": ""unknown_keys"" } ``` If you add `?alter=1` to the URL you are posting to any missing columns will be automatically added: ``` curl --request POST \ --data '[ { ""id"": 3, ""name"": ""Boris"", ""age"": 1, ""breed"": ""Husky"" } ]' \ 'http://localhost:8001/-/insert/data/dogs?alter=1' ``` ## Upserting data An ""upsert"" operation can be used to partially update a record. With upserts you can send a subset of the keys and, if the ID matches the specified primary key, they will be used to update an existing record. Upserts can be sent to the `/-/upsert` API endpoint. This example will update the dog with ID=1's age from 5 to 7: ``` curl --request POST \ --data '{ ""id"": 1, ""age"": 7 }' \ 'http://localhost:3322/-/upsert/data/dogs?pk=id' ``` Like the `/-/insert` endpoint, the `/-/upsert` endpoint can accept an array of objects too. It also supports the `?alter=1` option. ## Permissions and authentication This plugin defaults to denying all access, to help ensure people don't accidentally deploy it on the open internet in an unsafe configuration. You can read about [Datasette's approach to authentication](https://datasette.readthedocs.io/en/stable/authentication.html) in the Datasette manual. You can install the `datasette-insert-unsafe` plugin to run in unsafe mode, where all access is allowed by default. I recommend using this plugin in conjunction with [datasette-auth-tokens](https://github.com/simonw/datasette-auth-tokens), which provides a mechanism for making authenticated calls using API tokens. You can then use [""allow"" blocks](https://datasette.readthedocs.io/en/stable/authentication.html#defining-permissions-with-allow-blocks) in the `datasette-insert` plugin configuration to specify which authenticated tokens are allowed to make use of the API. 
Here's an example `metadata.json` file which restricts access to the `/-/insert` API to an API token defined in an `INSERT_TOKEN` environment variable: ```json { ""plugins"": { ""datasette-insert"": { ""allow"": { ""bot"": ""insert-bot"" } }, ""datasette-auth-tokens"": { ""tokens"": [ { ""token"": { ""$env"": ""INSERT_TOKEN"" }, ""actor"": { ""bot"": ""insert-bot"" } } ] } } } ``` With this configuration in place you can start Datasette like this: INSERT_TOKEN=abc123 datasette data.db -m metadata.json You can now send data to the API using `curl` like this: ``` curl --request POST \ -H ""Authorization: Bearer abc123"" \ --data '[ { ""id"": 3, ""name"": ""Boris"", ""age"": 1, ""breed"": ""Husky"" } ]' \ 'http://localhost:8001/-/insert/data/dogs' ``` Or using the Python `requests` library like so: ```python requests.post( ""http://localhost:8001/-/insert/data/dogs"", json={""id"": 1, ""name"": ""Cleopaws"", ""age"": 5}, headers={""Authorization"": ""bearer abc123""}, ) ``` ### Finely grained permissions Using an `""allow""` block as described above grants full permission to the features enabled by the API. The API implements several new Datasett permissions, which other plugins can use to make more finely grained decisions. The full set of permissions are as follows: - `insert:all` - all permissions - this is used by the `""allow""` block described above. Argument: `database_name` - `insert:insert-update` - the ability to insert data into an existing table, or to update data by its primary key. Arguments: `(database_name, table_name)` - `insert:create-table` - the ability to create a new table. Argument: `database_name` - `insert:alter-table` - the ability to add columns to an existing table (using `?alter=1`). Arguments: `(database_name, table_name)` You can use plugins like [datasette-permissions-sql](https://github.com/simonw/datasette-permissions-sql) to hook into these more detailed permissions for finely grained control over what actions each authenticated actor can take. Plugins that implement the [permission_allowed()](https://datasette.readthedocs.io/en/stable/plugin_hooks.html#plugin-hook-permission-allowed) plugin hook can take full control over these permission decisions. ## CORS If you start Datasette with the `datasette --cors` option the following HTTP headers will be added to resources served by this plugin: Access-Control-Allow-Origin: * Access-Control-Allow-Headers: content-type,authorization Access-Control-Allow-Methods: POST ## Development To set up this plugin locally, first checkout the code. Then create a new virtual environment: cd datasette-insert python3 -m venv venv source venv/bin/activate Now install the dependencies and tests: pip install -e '.[test]' To run the tests: pytest ","

datasette-insert

Datasette plugin for inserting and updating data

Installation

Install this plugin in the same environment as Datasette.

$ pip install datasette-insert

This plugin should always be deployed with additional configuration to prevent unauthenticated access, see notes below.

If you are trying it out on your own local machine, you can pip install the datasette-insert-unsafe plugin to allow access without needing to set up authentication or permissions separately.

Inserting data and creating tables

Start datasette and make sure it has a writable SQLite database attached to it. If you have not yet created a database file you can use this:

datasette data.db --create

The --create option will create a new empty data.db database file if it does not already exist.

The plugin adds an endpoint that allows data to be inserted or updated and tables to be created by POSTing JSON data to the following URL:

/-/insert/name-of-database/name-of-table

The JSON should look like this:

[
    {
        ""id"": 1,
        ""name"": ""Cleopaws"",
        ""age"": 5
    },
    {
        ""id"": 2,
        ""name"": ""Pancakes"",
        ""age"": 5
    }
]

The first time data is posted to the URL a table of that name will be created if it does not already exist, with the desired columns.

You can specify which column should be used as the primary key using the ?pk= URL argument.

Here's how to POST to a database and create a new table using the Python requests library:

import requests

requests.post(""http://localhost:8001/-/insert/data/dogs?pk=id"", json=[
    {
        ""id"": 1,
        ""name"": ""Cleopaws"",
        ""age"": 5
    },
    {
        ""id"": 2,
        ""name"": ""Pancakes"",
        ""age"": 4
    }
])

And here's how to do the same thing using curl:

curl --request POST \
  --data '[
      {
        ""id"": 1,
        ""name"": ""Cleopaws"",
        ""age"": 5
      },
      {
        ""id"": 2,
        ""name"": ""Pancakes"",
        ""age"": 4
      }
    ]' \
    'http://localhost:8001/-/insert/data/dogs?pk=id'

Or by piping in JSON like so:

cat dogs.json | curl --request POST -d @- \
    'http://localhost:8001/-/insert/data/dogs?pk=id'
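
For comparison, here's a rough Python equivalent of that piped example, using the requests library (the dogs.json filename is taken from the curl example above):

import json

import requests

# Load rows from the same dogs.json file used in the curl example
with open(""dogs.json"") as f:
    rows = json.load(f)

response = requests.post(
    ""http://localhost:8001/-/insert/data/dogs?pk=id"",
    json=rows,
)
print(response.status_code)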

Inserting a single row

If you are inserting a single row you can optionally send it as a dictionary rather than a list with a single item:

curl --request POST \
  --data '{
      ""id"": 1,
      ""name"": ""Cleopaws"",
      ""age"": 5
    }' \
    'http://localhost:8001/-/insert/data/dogs?pk=id'
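
The equivalent single-row insert in Python might look like this (a sketch using the requests library, matching the authenticated example later in this document):

import requests

# A single row can be sent as a dictionary instead of a list
requests.post(
    ""http://localhost:8001/-/insert/data/dogs?pk=id"",
    json={""id"": 1, ""name"": ""Cleopaws"", ""age"": 5},
)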

Automatically adding new columns

If you send data to an existing table with keys that are not reflected by the existing columns, you will get an HTTP 400 error with a JSON response like this:

{
    ""status"": 400,
    ""error"": ""Unknown keys: 'foo'"",
    ""error_code"": ""unknown_keys""
}

If you add ?alter=1 to the URL you are posting to, any missing columns will be automatically added:

curl --request POST \
  --data '[
      {
        ""id"": 3,
        ""name"": ""Boris"",
        ""age"": 1,
        ""breed"": ""Husky""
      }
    ]' \
    'http://localhost:8001/-/insert/data/dogs?alter=1'
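
A client could also detect the unknown_keys error and retry with ?alter=1 - a minimal Python sketch, assuming the error shape shown above:

import requests

url = ""http://localhost:8001/-/insert/data/dogs?pk=id""
rows = [{""id"": 3, ""name"": ""Boris"", ""age"": 1, ""breed"": ""Husky""}]

response = requests.post(url, json=rows)
if response.status_code == 400 and response.json().get(""error_code"") == ""unknown_keys"":
    # Retry with alter=1 so the missing columns are added automatically
    response = requests.post(url + ""&alter=1"", json=rows)
print(response.status_code)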

Upserting data

An ""upsert"" operation can be used to partially update a record. With upserts you can send a subset of the keys and, if the ID matches the specified primary key, they will be used to update an existing record.

Upserts can be sent to the /-/upsert API endpoint.

This example will update the age of the dog with ID=1 from 5 to 7:

curl --request POST \
  --data '{
      ""id"": 1,
      ""age"": 7
    }' \
    'http://localhost:8001/-/upsert/data/dogs?pk=id'

Like the /-/insert endpoint, the /-/upsert endpoint can accept an array of objects too. It also supports the ?alter=1 option.
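
In Python the same partial update might look like this (a sketch using the requests library and the local port from the earlier insert examples):

import requests

# Only the keys you send are updated; other columns are left unchanged
requests.post(
    ""http://localhost:8001/-/upsert/data/dogs?pk=id"",
    json={""id"": 1, ""age"": 7},
)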

Permissions and authentication

This plugin defaults to denying all access, to help ensure people don't accidentally deploy it on the open internet in an unsafe configuration.

You can read about Datasette's approach to authentication in the Datasette manual.

You can install the datasette-insert-unsafe plugin to run in unsafe mode, where all access is allowed by default.

I recommend using this plugin in conjunction with datasette-auth-tokens, which provides a mechanism for making authenticated calls using API tokens.

You can then use ""allow"" blocks in the datasette-insert plugin configuration to specify which authenticated tokens are allowed to make use of the API.

Here's an example metadata.json file which restricts access to the /-/insert API to an API token defined in an INSERT_TOKEN environment variable:

{
    ""plugins"": {
        ""datasette-insert"": {
            ""allow"": {
                ""bot"": ""insert-bot""
            }
        },
        ""datasette-auth-tokens"": {
            ""tokens"": [
                {
                    ""token"": {
                        ""$env"": ""INSERT_TOKEN""
                    },
                    ""actor"": {
                        ""bot"": ""insert-bot""
                    }
                }
            ]
        }
    }
}

With this configuration in place you can start Datasette like this:

INSERT_TOKEN=abc123 datasette data.db -m metadata.json

You can now send data to the API using curl like this:

curl --request POST \
  -H ""Authorization: Bearer abc123"" \
  --data '[
      {
        ""id"": 3,
        ""name"": ""Boris"",
        ""age"": 1,
        ""breed"": ""Husky""
      }
    ]' \
    'http://localhost:8001/-/insert/data/dogs'

Or using the Python requests library like so:

requests.post(
    ""http://localhost:8001/-/insert/data/dogs"",
    json={""id"": 1, ""name"": ""Cleopaws"", ""age"": 5},
    headers={""Authorization"": ""bearer abc123""},
)

Finely grained permissions

Using an ""allow"" block as described above grants full permission to the features enabled by the API.

The API implements several new Datasette permissions, which other plugins can use to make more finely grained decisions.

The full set of permissions is as follows:

- insert:all - all permissions - this is used by the ""allow"" block described above. Argument: database_name
- insert:insert-update - the ability to insert data into an existing table, or to update data by its primary key. Arguments: (database_name, table_name)
- insert:create-table - the ability to create a new table. Argument: database_name
- insert:alter-table - the ability to add columns to an existing table (using ?alter=1). Arguments: (database_name, table_name)

You can use plugins like datasette-permissions-sql to hook into these more detailed permissions for finely grained control over what actions each authenticated actor can take.

Plugins that implement the permission_allowed() plugin hook can take full control over these permission decisions.
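
For example, a plugin could implement that hook to reserve table creation for a single actor - a minimal sketch, assuming the insert-bot actor from the metadata.json example above:

from datasette import hookimpl

@hookimpl
def permission_allowed(actor, action, resource):
    # Only the insert-bot actor may create new tables;
    # returning None for other actions defers to other plugins
    if action == ""insert:create-table"":
        return actor is not None and actor.get(""bot"") == ""insert-bot""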

CORS

If you start Datasette with the datasette --cors option the following HTTP headers will be added to resources served by this plugin:

Access-Control-Allow-Origin: *
Access-Control-Allow-Headers: content-type,authorization
Access-Control-Allow-Methods: POST

Development

To set up this plugin locally, first check out the code. Then create a new virtual environment:

cd datasette-insert
python3 -m venv venv
source venv/bin/activate

Now install the dependencies and test dependencies:

pip install -e '.[test]'

To run the tests:

pytest
",1,public,0,,0, 291359358,MDEwOlJlcG9zaXRvcnkyOTEzNTkzNTg=,datasette-yaml,simonw/datasette-yaml,0,9599,https://github.com/simonw/datasette-yaml,Export Datasette records as YAML,0,2020-08-29T22:32:15Z,2020-12-28T03:20:36Z,2021-05-13T08:59:53Z,,7,2,2,Python,1,1,1,1,0,1,0,0,1,,"[""yaml"", ""datasette"", ""datasette-plugin"", ""datasette-io""]",1,1,2,main,"{""admin"": false, ""push"": false, ""pull"": false}",,,1,1,"# datasette-yaml [![PyPI](https://img.shields.io/pypi/v/datasette-yaml.svg)](https://pypi.org/project/datasette-yaml/) [![Changelog](https://img.shields.io/github/v/release/simonw/datasette-yaml?include_prereleases&label=changelog)](https://github.com/simonw/datasette-yaml/releases) [![Tests](https://github.com/simonw/datasette-yaml/workflows/Test/badge.svg)](https://github.com/simonw/datasette-yaml/actions?query=workflow%3ATest) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/datasette-yaml/blob/main/LICENSE) Export Datasette records as YAML ## Installation Install this plugin in the same environment as Datasette. $ datasette install datasette-yaml ## Usage Having installed this plugin, every table and query will gain a new `.yaml` export link. You can also construct these URLs directly: `/dbname/tablename.yaml` ## Demo The plugin is running on [covid-19.datasettes.com](https://covid-19.datasettes.co/) - for example [/covid/latest_ny_times_counties_with_populations.yaml](https://covid-19.datasettes.com/covid/latest_ny_times_counties_with_populations.yaml) ## Development To set up this plugin locally, first checkout the code. Then create a new virtual environment: cd datasette-yaml python3 -mvenv venv source venv/bin/activate Or if you are using `pipenv`: pipenv shell Now install the dependencies and tests: pip install -e '.[test]' To run the tests: pytest ","

datasette-yaml

Export Datasette records as YAML

Installation

Install this plugin in the same environment as Datasette.

$ datasette install datasette-yaml

Usage

Having installed this plugin, every table and query will gain a new .yaml export link.

You can also construct these URLs directly: /dbname/tablename.yaml
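
For example, you could fetch a table's YAML export in Python like this (a sketch; the host, database and table names are placeholders):

import requests

# /dbname/tablename.yaml follows the URL pattern described above
response = requests.get(""http://localhost:8001/dbname/tablename.yaml"")
print(response.text)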

Demo

The plugin is running on covid-19.datasettes.com - for example /covid/latest_ny_times_counties_with_populations.yaml

Development

To set up this plugin locally, first check out the code. Then create a new virtual environment:

cd datasette-yaml
python3 -m venv venv
source venv/bin/activate

Or if you are using pipenv:

pipenv shell

Now install the dependencies and test dependencies:

pip install -e '.[test]'

To run the tests:

pytest
",,,,,, 312934001,MDEwOlJlcG9zaXRvcnkzMTI5MzQwMDE=,datasette-indieauth,simonw/datasette-indieauth,0,9599,https://github.com/simonw/datasette-indieauth,Datasette authentication using IndieAuth and RelMeAuth,0,2020-11-15T01:18:21Z,2022-10-25T01:00:43Z,2022-10-25T01:34:47Z,,51,8,8,Python,1,1,1,1,0,0,0,0,1,,"[""datasette"", ""datasette-io"", ""datasette-plugin"", ""indieauth""]",0,1,8,main,"{""admin"": false, ""maintain"": false, ""push"": false, ""triage"": false, ""pull"": false}",,,0,3,,,1,public,0,,0, 327087207,MDEwOlJlcG9zaXRvcnkzMjcwODcyMDc=,datasette-css-properties,simonw/datasette-css-properties,0,9599,https://github.com/simonw/datasette-css-properties,Experimental Datasette output plugin using CSS properties,0,2021-01-05T18:38:07Z,2021-01-12T17:43:11Z,2021-01-07T22:07:19Z,,10,12,12,Python,1,1,1,1,0,0,0,0,1,,"[""datasette-plugin"", ""datasette-io""]",0,1,12,main,"{""admin"": false, ""push"": false, ""pull"": false}",,,0,2,"# datasette-css-properties [![PyPI](https://img.shields.io/pypi/v/datasette-css-properties.svg)](https://pypi.org/project/datasette-css-properties/) [![Changelog](https://img.shields.io/github/v/release/simonw/datasette-css-properties?include_prereleases&label=changelog)](https://github.com/simonw/datasette-css-properties/releases) [![Tests](https://github.com/simonw/datasette-css-properties/workflows/Test/badge.svg)](https://github.com/simonw/datasette-css-properties/actions?query=workflow%3ATest) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/datasette-css-properties/blob/main/LICENSE) Extremely experimental Datasette output plugin using CSS properties, inspired by [Custom Properties as State](https://css-tricks.com/custom-properties-as-state/) by Chris Coyier. More about this project: [APIs from CSS without JavaScript: the datasette-css-properties plugin](https://simonwillison.net/2021/Jan/7/css-apis-no-javascript/) ## Installation Install this plugin in the same environment as Datasette. $ datasette install datasette-css-properties ## Usage Once installed, this plugin adds a `.css` output format to every query result. This will return the first row in the query as a valid CSS file, defining each column as a custom property: Example: https://latest-with-plugins.datasette.io/fixtures/roadside_attractions.css produces: ```css :root { --pk: '1'; --name: 'The Mystery Spot'; --address: '465 Mystery Spot Road, Santa Cruz, CA 95065'; --latitude: '37.0167'; --longitude: '-122.0024'; } ``` If you link this stylesheet to your page you can then do things like this; ```html

<link rel=""stylesheet"" href=""https://latest-with-plugins.datasette.io/fixtures/roadside_attractions.css"">
<style>
.attraction-name:after { content: var(--name); }
</style>
<p class=""attraction-name"">Attraction name: </p>

``` Values will be quoted as CSS strings by default. If you want to return a ""raw"" value without the quotes - for example to set a CSS property that is numeric or a color, you can specify that column name using the `?_raw=column-name` parameter. This can be passed multiple times. Consider [this example query](https://latest-with-plugins.datasette.io/github?sql=select%0D%0A++%27%23%27+||+substr(sha%2C+0%2C+6)+as+[custom-bg]%0D%0Afrom%0D%0A++commits%0D%0Aorder+by%0D%0A++author_date+desc%0D%0Alimit%0D%0A++1%3B): ```sql select '#' || substr(sha, 0, 6) as [custom-bg] from commits order by author_date desc limit 1; ``` This returns the first 6 characters of the most recently authored commit with a `#` prefix. The `.css` [output rendered version](https://latest-with-plugins.datasette.io/github.css?sql=select%0D%0A++%27%23%27+||+substr(sha%2C+0%2C+6)+as+[custom-bg]%0D%0Afrom%0D%0A++commits%0D%0Aorder+by%0D%0A++author_date+desc%0D%0Alimit%0D%0A++1%3B) looks like this: ```css :root { --custom-bg: '#97fb1'; } ``` Adding `?_raw=custom-bg` to the URL produces [this instead](https://latest-with-plugins.datasette.io/github.css?sql=select%0D%0A++%27%23%27+||+substr(sha%2C+0%2C+6)+as+[custom-bg]%0D%0Afrom%0D%0A++commits%0D%0Aorder+by%0D%0A++author_date+desc%0D%0Alimit%0D%0A++1%3B&_raw=custom-bg): ```css :root { --custom-bg: #97fb1; } ``` This can then be used as a color value like so: ```css h1 { background-color: var(--custom-bg); } ``` ## Development To set up this plugin locally, first checkout the code. Then create a new virtual environment: cd datasette-css-properties python3 -mvenv venv source venv/bin/activate Or if you are using `pipenv`: pipenv shell Now install the dependencies and tests: pip install -e '.[test]' To run the tests: pytest ","

datasette-css-properties

Extremely experimental Datasette output plugin using CSS properties, inspired by Custom Properties as State by Chris Coyier.

More about this project: APIs from CSS without JavaScript: the datasette-css-properties plugin

Installation

Install this plugin in the same environment as Datasette.

$ datasette install datasette-css-properties

Usage

Once installed, this plugin adds a .css output format to every query result. This will return the first row in the query as a valid CSS file, defining each column as a custom property:

Example: https://latest-with-plugins.datasette.io/fixtures/roadside_attractions.css produces:

:root {
  --pk: '1';
  --name: 'The Mystery Spot';
  --address: '465 Mystery Spot Road, Santa Cruz, CA 95065';
  --latitude: '37.0167';
  --longitude: '-122.0024';
}

If you link this stylesheet to your page you can then do things like this:

<link rel=""stylesheet"" href=""https://latest-with-plugins.datasette.io/fixtures/roadside_attractions.css"">
<style>
.attraction-name:after { content: var(--name); }
</style>
<p class=""attraction-name"">Attraction name: </p>

Values will be quoted as CSS strings by default. If you want to return a ""raw"" value without the quotes - for example, to set a CSS property that is numeric or a color - you can specify that column name using the ?_raw=column-name parameter. This can be passed multiple times.

Consider this example query:

select
  '#' || substr(sha, 0, 6) as [custom-bg]
from
  commits
order by
  author_date desc
limit
  1;

This returns the first 6 characters of the most recently authored commit with a # prefix. The .css output rendered version looks like this:

:root {
  --custom-bg: '#97fb1';
}

Adding ?_raw=custom-bg to the URL produces this instead:

:root {
  --custom-bg: #97fb1;
}

This can then be used as a color value like so:

h1 {
    background-color: var(--custom-bg);
}

Development

To set up this plugin locally, first check out the code. Then create a new virtual environment:

cd datasette-css-properties
python3 -m venv venv
source venv/bin/activate

Or if you are using pipenv:

pipenv shell

Now install the dependencies and test dependencies:

pip install -e '.[test]'

To run the tests:

pytest
",,,,,, 375546675,MDEwOlJlcG9zaXRvcnkzNzU1NDY2NzU=,datasette-placekey,simonw/datasette-placekey,0,9599,https://github.com/simonw/datasette-placekey,SQL functions for working with placekeys,0,2021-06-10T02:31:27Z,2021-06-10T02:33:22Z,2021-06-10T02:32:42Z,https://datasette.io/plugins/datasette-placekey,3,0,0,Python,1,1,1,1,0,0,0,0,1,,"[""datasette"", ""datasette-plugin"", ""datasette-io"", ""placekey""]",0,1,0,main,"{""admin"": false, ""push"": false, ""pull"": false}",,,0,1,"# datasette-placekey [![PyPI](https://img.shields.io/pypi/v/datasette-placekey.svg)](https://pypi.org/project/datasette-placekey/) [![Changelog](https://img.shields.io/github/v/release/simonw/datasette-placekey?include_prereleases&label=changelog)](https://github.com/simonw/datasette-placekey/releases) [![Tests](https://github.com/simonw/datasette-placekey/workflows/Test/badge.svg)](https://github.com/simonw/datasette-placekey/actions?query=workflow%3ATest) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/datasette-placekey/blob/main/LICENSE) SQL functions for working with [placekeys](https://www.placekey.io/). ## Installation Install this plugin in the same environment as Datasette. $ datasette install datasette-placekey ## Usage The following SQL functions are exposed - [documentation here](https://placekey.github.io/placekey-py/placekey.html#module-placekey.placekey). ```sql select geo_to_placekey(33.0896104,129.7900839), placekey_to_geo('@6nh-nhh-kvf'), placekey_to_geo_latitude('@6nh-nhh-kvf'), placekey_to_geo_longitude('@6nh-nhh-kvf'), placekey_to_h3('@6nh-nhh-kvf'), h3_to_placekey('8a30d94e4c87fff'), placekey_to_geojson('@6nh-nhh-kvf'), placekey_to_wkt('@6nh-nhh-kvf'), placekey_format_is_valid('@6nh-nhh-kvf'); ``` ## Development To set up this plugin locally, first checkout the code. Then create a new virtual environment: cd datasette-placekey python3 -mvenv venv source venv/bin/activate Or if you are using `pipenv`: pipenv shell Now install the dependencies and tests: pip install -e '.[test]' To run the tests: pytest ","

datasette-placekey

SQL functions for working with placekeys.

Installation

Install this plugin in the same environment as Datasette.

$ datasette install datasette-placekey

Usage

The following SQL functions are exposed - documentation here.

select
  geo_to_placekey(33.0896104,129.7900839),
  placekey_to_geo('@6nh-nhh-kvf'),
  placekey_to_geo_latitude('@6nh-nhh-kvf'),
  placekey_to_geo_longitude('@6nh-nhh-kvf'),
  placekey_to_h3('@6nh-nhh-kvf'),
  h3_to_placekey('8a30d94e4c87fff'),
  placekey_to_geojson('@6nh-nhh-kvf'),
  placekey_to_wkt('@6nh-nhh-kvf'),
  placekey_format_is_valid('@6nh-nhh-kvf');
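
Since these functions run inside Datasette, one way to call them from Python is through Datasette's JSON API - a sketch, assuming a database named data served at localhost:8001:

import requests

# Run one of the placekey functions via Datasette's ?sql= JSON API
response = requests.get(
    ""http://localhost:8001/data.json"",
    params={""sql"": ""select geo_to_placekey(33.0896104, 129.7900839)""},
)
print(response.json())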

Development

To set up this plugin locally, first check out the code. Then create a new virtual environment:

cd datasette-placekey
python3 -m venv venv
source venv/bin/activate

Or if you are using pipenv:

pipenv shell

Now install the dependencies and test dependencies:

pip install -e '.[test]'

To run the tests:

pytest
",,,,,,