pypi_packages
142 rows
name | summary | classifiers | description | author | author_email | description_content_type | home_page | keywords | license | maintainer | maintainer_email | package_url | platform | project_url | project_urls | release_url | requires_dist | requires_python | version | yanked | yanked_reason |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
airtable-export | Export Airtable data to files on disk | [] | # airtable-export [](https://pypi.org/project/airtable-export/) [](https://github.com/simonw/airtable-export/releases) [](https://github.com/simonw/airtable-export/actions?query=workflow%3ATest) [](https://github.com/simonw/airtable-export/blob/master/LICENSE) Export Airtable data to files on disk ## Installation Install this tool using `pip`: $ pip install airtable-export ## Usage You will need to know the following information: - Your Airtable base ID - this is a string starting with `app...` - Your Airtable API key - this is a string starting with `key...` - The names of each of the tables that you wish to export You can export all of your data to a folder called `export/` by running the following: airtable-export export base_id table1 table2 --key=key This example would create two files: `export/table1.yml` and `export/table2.yml`. Rather than passing the API key using the `--key` option you can set it as an environment variable called `AIRTABLE_KEY`. ## Export options By default the tool exports your data as YAML. You can also export as JSON or as [newline delimited JSON](http://ndjson.org/) using the `--json` or `--ndjson` options: airtable-export export base_id table1 table2 --key=key --ndjson You can pass multiple format options at once. This command will create a `.json`, `.yml` and `.ndjson` file for each exported table: airtable-export export base_id table1 table2 \ --key=key --ndjson --yaml --json ### SQLite database export You can export tables to a SQLite database file using the `--sqlite database.db` option: airtable-export export base_id table1 table2 \ --key=key --sqlite database.db This can be combined with other format options.… | Simon Willison | text/markdown | https://github.com/simonw/airtable-export | Apache License, Version 2.0 | https://pypi.org/project/airtable-export/ | https://pypi.org/project/airtable-export/ | {"CI": "https://github.com/simonw/airtable-export/actions", "Changelog": "https://github.com/simonw/airtable-export/releases", "Homepage": "https://github.com/simonw/airtable-export", "Issues": "https://github.com/simonw/airtable-export/issues"} | https://pypi.org/project/airtable-export/0.7.1/ | ["click", "PyYAML", "httpx", "sqlite-utils", "pytest ; extra == 'test'", "pytest-mock ; extra == 'test'"] | 0.7.1 | 0 | |||||||
csv-diff | Python CLI tool and library for diffing CSV and JSON files | ["Development Status :: 4 - Beta", "Intended Audience :: Developers", "Intended Audience :: End Users/Desktop", "Intended Audience :: Science/Research", "License :: OSI Approved :: Apache Software License", "Programming Language :: Python :: 3.6", "Programming Language :: Python :: 3.7"] | # csv-diff [](https://pypi.org/project/csv-diff/) [](https://github.com/simonw/csv-diff/releases) [](https://github.com/simonw/csv-diff/actions?query=workflow%3ATest) [](https://github.com/simonw/csv-diff/blob/main/LICENSE) Tool for viewing the difference between two CSV, TSV or JSON files. See [Generating a commit log for San Francisco’s official list of trees](https://simonwillison.net/2019/Mar/13/tree-history/) (and the [sf-tree-history repo commit log](https://github.com/simonw/sf-tree-history/commits)) for background information on this project. ## Installation pip install csv-diff ## Usage Consider two CSV files: `one.csv` id,name,age 1,Cleo,4 2,Pancakes,2 `two.csv` id,name,age 1,Cleo,5 3,Bailey,1 `csv-diff` can show a human-readable summary of differences between the files: $ csv-diff one.csv two.csv --key=id 1 row changed, 1 row added, 1 row removed 1 row changed Row 1 age: "4" => "5" 1 row added id: 3 name: Bailey age: 1 1 row removed id: 2 name: Pancakes age: 2 The `--key=id` option means that the `id` column should be treated as the unique key, to identify which records have changed. The tool will automatically detect if your files are comma- or tab-separated. You can over-ride this automatic detection and force the tool to use a specific format using `--format=tsv` or `--format=csv`. You can also feed it JSON files, provided they are a JSON array of objects where each object has the same keys. Use `--format=json` if your input files are JSON. Use `--show-unchanged` to include full details of the unchanged values for rows with at least one change in the diff output: % csv-diff one.csv two.c… | Simon Willison | text/markdown | https://github.com/simonw/csv-diff | Apache License, Version 2.0 | https://pypi.org/project/csv-diff/ | https://pypi.org/project/csv-diff/ | {"Homepage": "https://github.com/simonw/csv-diff"} | https://pypi.org/project/csv-diff/1.1/ | ["click", "dictdiffer", "pytest ; extra == 'test'"] | 1.1 | 0 | |||||||
csvs-to-sqlite | Convert CSV files into a SQLite database | ["Intended Audience :: Developers", "Intended Audience :: End Users/Desktop", "Intended Audience :: Science/Research", "License :: OSI Approved :: Apache Software License", "Programming Language :: Python :: 3.6", "Programming Language :: Python :: 3.7", "Programming Language :: Python :: 3.8", "Programming Language :: Python :: 3.9", "Topic :: Database"] | # csvs-to-sqlite [](https://pypi.org/project/csvs-to-sqlite/) [](https://github.com/simonw/csvs-to-sqlite/releases) [](https://github.com/simonw/csvs-to-sqlite/actions?query=workflow%3ATest) [](https://github.com/simonw/csvs-to-sqlite/blob/main/LICENSE) Convert CSV files into a SQLite database. Browse and publish that SQLite database with [Datasette](https://github.com/simonw/datasette). Basic usage: csvs-to-sqlite myfile.csv mydatabase.db This will create a new SQLite database called `mydatabase.db` containing a single table, `myfile`, containing the CSV content. You can provide multiple CSV files: csvs-to-sqlite one.csv two.csv bundle.db The `bundle.db` database will contain two tables, `one` and `two`. This means you can use wildcards: csvs-to-sqlite ~/Downloads/*.csv my-downloads.db If you pass a path to one or more directories, the script will recursively search those directories for CSV files and create tables for each one. csvs-to-sqlite ~/path/to/directory all-my-csvs.db ## Handling TSV (tab-separated values) You can use the `-s` option to specify a different delimiter. If you want to use a tab character you'll need to apply shell escaping like so: csvs-to-sqlite my-file.tsv my-file.db -s $'\t' ## Refactoring columns into separate lookup tables Let's say you have a CSV file that looks like this: county,precinct,office,district,party,candidate,votes Clark,1,President,,REP,John R. Kasich,5 Clark,2,President,,REP,John R. Kasich,0 Clark,3,President,,REP,John R. Kasich,7 ([Real example taken from the Open Elections project](https://github.com/openelections/openelections-data-sd/blob/master/2016/20160607__sd__primary__clark__precinct.csv)) You can n… | Simon Willison | text/markdown | https://github.com/simonw/csvs-to-sqlite | Apache License, Version 2.0 | https://pypi.org/project/csvs-to-sqlite/ | https://pypi.org/project/csvs-to-sqlite/ | {"Homepage": "https://github.com/simonw/csvs-to-sqlite"} | https://pypi.org/project/csvs-to-sqlite/1.3/ | ["click (~=7.0)", "dateparser (>=1.0)", "pandas (>=1.0)", "py-lru-cache (~=0.1.4)", "six", "pytest ; extra == 'test'", "cogapp ; extra == 'test'"] | 1.3 | 0 | |||||||
datasette | An open source multi-tool for exploring and publishing data | ["Development Status :: 4 - Beta", "Framework :: Datasette", "Intended Audience :: Developers", "Intended Audience :: End Users/Desktop", "Intended Audience :: Science/Research", "License :: OSI Approved :: Apache Software License", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.7", "Programming Language :: Python :: 3.8", "Programming Language :: Python :: 3.9", "Topic :: Database"] | <img src="https://datasette.io/static/datasette-logo.svg" alt="Datasette"> [](https://pypi.org/project/datasette/) [](https://docs.datasette.io/en/stable/changelog.html) [](https://pypi.org/project/datasette/) [](https://github.com/simonw/datasette/actions?query=workflow%3ATest) [](https://docs.datasette.io/en/latest/?badge=latest) [](https://github.com/simonw/datasette/blob/main/LICENSE) [](https://hub.docker.com/r/datasetteproject/datasette) [](https://discord.gg/ktd74dm5mw) *An open source multi-tool for exploring and publishing data* Datasette is a tool for exploring and publishing data. It helps people take data of any shape or size and publish that as an interactive, explorable website and accompanying API. Datasette is aimed at data journalists, museum curators, archivists, local governments, scientists, researchers and anyone else who has data that they wish to share with the world. [Explore a demo](https://global-power-plants.datasettes.com/global-power-plants/global-power-plants), watch [a video about the project](https://simonwillison.net/2021/Feb/7/video/) or try it out by [uploading and publishing your own CSV data](https://docs.datasette.io/en/stable/getting_started.html#try-datasette-without-installing-anything-using-glitch). * [datasette.io](https://datasette.io/) is the official project website * Latest [Datasette News](https://datasette.io/news) * Comprehensive documentation: https://docs.dat… | Simon Willison | text/markdown | https://datasette.io/ | Apache License, Version 2.0 | https://pypi.org/project/datasette/ | https://pypi.org/project/datasette/ | {"CI": "https://github.com/simonw/datasette/actions?query=workflow%3ATest", "Changelog": "https://docs.datasette.io/en/stable/changelog.html", "Documentation": "https://docs.datasette.io/en/stable/", "Homepage": "https://datasette.io/", "Issues": "https://github.com/simonw/datasette/issues", "Live demo": "https://latest.datasette.io/", "Source code": "https://github.com/simonw/datasette"} | https://pypi.org/project/datasette/0.63.1/ | ["asgiref (>=3.2.10)", "click (>=7.1.1)", "click-default-group-wheel (>=1.2.2)", "Jinja2 (>=2.10.3)", "hupper (>=1.9)", "httpx (>=0.20)", "pint (>=0.9)", "pluggy (>=1.0)", "uvicorn (>=0.11)", "aiofiles (>=0.4)", "janus (>=0.6.2)", "asgi-csrf (>=0.9)", "PyYAML (>=5.3)", "mergedeep (>=1.1.1)", "itsdangerous (>=1.1)", "furo (==2022.9.29) ; extra == 'docs'", "sphinx-autobuild ; extra == 'docs'", "codespell ; extra == 'docs'", "blacken-docs ; extra == 'docs'", "sphinx-copybutton ; extra == 'docs'", "rich ; extra == 'rich'", "pytest (>=5.2.2) ; extra == 'test'", "pytest-xdist (>=2.2.1) ; extra == 'test'", "pytest-asyncio (>=0.17) ; extra == 'test'", "beautifulsoup4 (>=4.8.1) ; extra == 'test'", "black (==22.10.0) ; extra == 'test'", "blacken-docs (==1.12.1) ; extra == 'test'", "pytest-timeout (>=1.4.2) ; extra == 'test'", "trustme (>=0.7) ; extra == 'test'", "cogapp (>=3.3.0) ; extra == 'test'"] | >=3.7 | 0.63.1 | 0 | ||||||
datasette-atom | Datasette plugin that adds a .atom output format | [] | # datasette-atom [](https://pypi.org/project/datasette-atom/) [](https://github.com/simonw/datasette-atom/releases) [](https://github.com/simonw/datasette-atom/actions?query=workflow%3ATest) [](https://github.com/simonw/datasette-atom/blob/main/LICENSE) Datasette plugin that adds support for generating [Atom feeds](https://validator.w3.org/feed/docs/atom.html) with the results of a SQL query. ## Installation Install this plugin in the same environment as Datasette to enable the `.atom` output extension. $ pip install datasette-atom ## Usage To create an Atom feed you need to define a custom SQL query that returns a required set of columns: * `atom_id` - a unique ID for each row. [This article](https://web.archive.org/web/20080211143232/http://diveintomark.org/archives/2004/05/28/howto-atom-id) has suggestions about ways to create these IDs. * `atom_title` - a title for that row. * `atom_updated` - an [RFC 3339](http://www.faqs.org/rfcs/rfc3339.html) timestamp representing the last time the entry was modified in a significant way. This can usually be the time that the row was created. The following columns are optional: * `atom_content` - content that should be shown in the feed. This will be treated as a regular string, so any embedded HTML tags will be escaped when they are displayed. * `atom_content_html` - content that should be shown in the feed. This will be treated as an HTML string, and will be sanitized using [Bleach](https://github.com/mozilla/bleach) to ensure it does not have any malicious code in it before being returned as part of a `<content type="html">` Atom element. If both are provided, this will be used in place of `atom_content`. * `atom_link` - a URL that should be used a… | Simon Willison | text/markdown | https://github.com/simonw/datasette-atom | Apache License, Version 2.0 | https://pypi.org/project/datasette-atom/ | https://pypi.org/project/datasette-atom/ | {"CI": "https://github.com/simonw/datasette-atom/actions", "Changelog": "https://github.com/simonw/datasette-atom/releases", "Homepage": "https://github.com/simonw/datasette-atom", "Issues": "https://github.com/simonw/datasette-atom/issues"} | https://pypi.org/project/datasette-atom/0.8.1/ | ["datasette (>=0.49)", "feedgen", "bleach", "pytest ; extra == 'test'", "pytest-asyncio ; extra == 'test'", "httpx ; extra == 'test'"] | 0.8.1 | 0 | |||||||
datasette-auth-existing-cookies | Datasette plugin that authenticates users based on existing domain cookies | [] | # datasette-auth-existing-cookies [](https://pypi.org/project/datasette-auth-existing-cookies/) [](https://circleci.com/gh/simonw/datasette-auth-existing-cookies) [](https://github.com/simonw/datasette-auth-existing-cookies/blob/master/LICENSE) Datasette plugin that authenticates users based on existing domain cookies. ## When to use this This plugin allows you to build custom authentication for Datasette when you are hosting a Datasette instance on the same domain as another, authenticated website. Consider a website on `www.example.com` which supports user authentication. You could run Datasette on `data.example.com` in a way that lets it see cookies that were set for the `.example.com` domain. Using this plugin, you could build an API endpoint at `www.example.com/user-for-cookies` which returns a JSON object representing the currently signed-in user, based on their cookies. The plugin can protect any hits to any `data.example.com` pages by passing their cookies through to that API and seeing if the user should be logged in or not. You can also use subclassing to decode existing cookies using some other mechanism. ## Configuration This plugin requires some configuration in the Datasette [metadata.json file](https://datasette.readthedocs.io/en/stable/plugins.html#plugin-configuration). It needs to know the following: * Which domain cookies should it be paying attention to? If you are authenticating against Django this is probably `["sessionid"]`. * What's an API it can send the incoming cookies to that will decipher them into some user information? * Where should it redirect the user if they need to sign in? Example configuration setting all three of these values looks like this: ```json { "plugins": { "datasette-auth-existing-cookies": { "api_ur… | Simon Willison | text/markdown | https://github.com/simonw/datasette-auth-existing-cookies | Apache License, Version 2.0 | https://pypi.org/project/datasette-auth-existing-cookies/ | https://pypi.org/project/datasette-auth-existing-cookies/ | {"Homepage": "https://github.com/simonw/datasette-auth-existing-cookies"} | https://pypi.org/project/datasette-auth-existing-cookies/0.7/ | ["appdirs", "httpx", "itsdangerous", "datasette ; extra == 'test'", "pytest ; extra == 'test'", "pytest-asyncio ; extra == 'test'", "asgiref (~=3.1.2) ; extra == 'test'"] | >=3.6 | 0.7 | 0 | ||||||
datasette-auth-github | Datasette plugin and ASGI middleware that authenticates users against GitHub | [] | # datasette-auth-github [](https://pypi.org/project/datasette-auth-github/) [](https://github.com/simonw/datasette-auth-github/releases) [](https://github.com/simonw/datasette-auth-github/actions?query=workflow%3ATest) [](https://github.com/simonw/datasette-auth-github/blob/main/LICENSE) Datasette plugin that authenticates users against GitHub. <!-- toc --> - [Setup instructions](#setup-instructions) - [The authenticated actor](#the-authenticated-actor) - [Restricting access to specific users](#restricting-access-to-specific-users) - [Restricting access to specific GitHub organizations or teams](#restricting-access-to-specific-github-organizations-or-teams) - [What to do if a user is removed from an organization or team](#what-to-do-if-a-user-is-removed-from-an-organization-or-team) <!-- tocstop --> ## Setup instructions * Install the plugin: `datasette install datasette-auth-github` * Create a GitHub OAuth app: https://github.com/settings/applications/new * Set the Authorization callback URL to `http://127.0.0.1:8001/-/github-auth-callback` * Create a `metadata.json` file with the following structure: ```json { "title": "datasette-auth-github demo", "plugins": { "datasette-auth-github": { "client_id": {"$env": "GITHUB_CLIENT_ID"}, "client_secret": {"$env": "GITHUB_CLIENT_SECRET"} } } } ``` Now you can start Datasette like this, passing in the secrets as environment variables: $ GITHUB_CLIENT_ID=XXX GITHUB_CLIENT_SECRET=YYY datasette \ fixtures.db -m metadata.json Note that hard-coding secrets in `metadata.json` is a bad idea as they will be visible to anyone who can navigate to `/-/metadata`. Instead, we use Da… | Simon Willison | text/markdown | https://github.com/simonw/datasette-auth-github | Apache License, Version 2.0 | https://pypi.org/project/datasette-auth-github/ | https://pypi.org/project/datasette-auth-github/ | {"Homepage": "https://github.com/simonw/datasette-auth-github"} | https://pypi.org/project/datasette-auth-github/0.13.1/ | ["datasette (>=0.51)", "datasette ; extra == 'test'", "pytest ; extra == 'test'", "pytest-asyncio ; extra == 'test'", "sqlite-utils ; extra == 'test'", "pytest-httpx ; extra == 'test'"] | 0.13.1 | 0 | |||||||
datasette-auth-passwords | Datasette plugin for authenticating access using passwords | [] | # datasette-auth-passwords [](https://pypi.org/project/datasette-auth-passwords/) [](https://github.com/simonw/datasette-auth-passwords/releases) [](https://github.com/simonw/datasette-auth-passwords/blob/master/LICENSE) Datasette plugin for authenticating access using passwords ## Installation Install this plugin in the same environment as Datasette. $ datasette install datasette-auth-passwords ## Demo A demo of this plugin is running at https://datasette-auth-passwords-demo.datasette.io/ The demo is configured to show the `public.db` database to everyone, but the `private.db` database only to logged in users. You can log in at https://datasette-auth-passwords-demo.datasette.io/-/login with username `root` and password `password!`. ## Usage This plugin works based on a list of username/password accounts that are hard-coded into the plugin configuration. First, you'll need to create a password hash. There are three ways to do that: - Install the plugin, then use the interactive tool located at `/-/password-tool` - Use the hosted version of that tool at https://datasette-auth-passwords-demo.datasette.io/-/password-tool - Use the `datasette hash-password` command, described below Now add the following to your `metadata.json`: ```json { "plugins": { "datasette-auth-passwords": { "someusername_password_hash": { "$env": "PASSWORD_HASH_1" } } } } ``` The password hash can now be specified in an environment variable when you run Datasette. You can do that like so: PASSWORD_HASH_1='pbkdf2_sha256$...' \ datasette -m metadata.json Be sure to use single quotes here otherwise the `$` symbols in the password hash may be incorrectly interpreted by your shell. You will now be able to log in to you… | Simon Willison | text/markdown | https://github.com/simonw/datasette-auth-passwords | Apache License, Version 2.0 | https://pypi.org/project/datasette-auth-passwords/ | https://pypi.org/project/datasette-auth-passwords/ | {"CI": "https://github.com/simonw/datasette-auth-passwords/actions", "Changelog": "https://github.com/simonw/datasette-auth-passwords/releases", "Homepage": "https://github.com/simonw/datasette-auth-passwords", "Issues": "https://github.com/simonw/datasette-auth-passwords/issues"} | https://pypi.org/project/datasette-auth-passwords/1.0/ | ["datasette (>=0.59)", "pytest ; extra == 'test'", "pytest-asyncio ; extra == 'test'", "httpx ; extra == 'test'"] | 1.0 | 0 | |||||||
datasette-auth-tokens | Datasette plugin for authenticating access using API tokens | [] | # datasette-auth-tokens [](https://pypi.org/project/datasette-auth-tokens/) [](https://github.com/simonw/datasette-auth-tokens/releases) [](https://github.com/simonw/datasette-auth-tokens/actions?query=workflow%3ATest) [](https://github.com/simonw/datasette-auth-tokens/blob/main/LICENSE) Datasette plugin for authenticating access using API tokens ## Installation Install this plugin in the same environment as Datasette. $ pip install datasette-auth-tokens ## Hard-coded tokens Read about Datasette's [authentication and permissions system](https://datasette.readthedocs.io/en/latest/authentication.html). This plugin lets you configure secret API tokens which can be used to make authenticated requests to Datasette. First, create a random API token. A useful recipe for doing that is the following: $ python -c 'import secrets; print(secrets.token_hex(32))' 5f9a486dd807de632200b17508c75002bb66ca6fde1993db1de6cbd446362589 Decide on the actor that this token should represent, for example: ```json { "bot_id": "my-bot" } ``` You can then use `"allow"` blocks to provide that token with permission to access specific actions. To enable access to a configured writable SQL query you could use this in your `metadata.json`: ```json { "plugins": { "datasette-auth-tokens": { "tokens": [ { "token": { "$env": "BOT_TOKEN" }, "actor": { "bot_id": "my-bot" } } ] } }, "databases": { ":memory:": { "queries": { "show_version": { … | Simon Willison | text/markdown | https://github.com/simonw/datasette-auth-tokens | Apache License, Version 2.0 | https://pypi.org/project/datasette-auth-tokens/ | https://pypi.org/project/datasette-auth-tokens/ | {"CI": "https://github.com/simonw/datasette-auth-tokens/actions", "Changelog": "https://github.com/simonw/datasette-auth-tokens/releases", "Homepage": "https://github.com/simonw/datasette-auth-tokens", "Issues": "https://github.com/simonw/datasette-auth-tokens/issues"} | https://pypi.org/project/datasette-auth-tokens/0.3/ | ["datasette (>=0.44)", "pytest ; extra == 'test'", "pytest-asyncio ; extra == 'test'", "httpx ; extra == 'test'", "sqlite-utils ; extra == 'test'"] | 0.3 | 0 | |||||||
datasette-auth0 | Datasette plugin that authenticates users using Auth0 | ["Framework :: Datasette", "License :: OSI Approved :: Apache Software License"] | # datasette-auth0 [](https://pypi.org/project/datasette-auth0/) [](https://github.com/simonw/datasette-auth0/releases) [](https://github.com/simonw/datasette-auth0/actions?query=workflow%3ATest) [](https://github.com/simonw/datasette-auth0/blob/main/LICENSE) Datasette plugin that authenticates users using [Auth0](https://auth0.com/) See [Simplest possible OAuth authentication with Auth0](https://til.simonwillison.net/auth0/oauth-with-auth0) for more about how this plugin works. ## Installation Install this plugin in the same environment as Datasette. $ datasette install datasette-auth0 ## Demo You can try this out at [datasette-auth0-demo.datasette.io](https://datasette-auth0-demo.datasette.io/) - click on the top right menu icon and select "Sign in with Auth0". ## Initial configuration First, create a new application in Auth0. You will need the domain, client ID and client secret for that application. The domain should be something like `mysite.us.auth0.com`. Add `http://127.0.0.1:8001/-/auth0-callback` to the list of Allowed Callback URLs. Then configure these plugin secrets using `metadata.yml`: ```yaml plugins: datasette-auth0: domain: "$env": AUTH0_DOMAIN client_id: "$env": AUTH0_CLIENT_ID client_secret: "$env": AUTH0_CLIENT_SECRET ``` Only the `client_secret` needs to be kept secret, but for consistency I recommend using the `$env` mechanism for all three. In development, you can run Datasette and pass in environment variables like this: ``` AUTH0_DOMAIN="your-domain.us.auth0.com" \ AUTH0_CLIENT_ID="...client-id-goes-here..." \ AUTH0_CLIENT_SECRET="...secret-goes-here..." \ datasette -m metadata.yml ``` If you are deploying using `datasette publi… | Simon Willison | text/markdown | https://github.com/simonw/datasette-auth0 | Apache License, Version 2.0 | https://pypi.org/project/datasette-auth0/ | https://pypi.org/project/datasette-auth0/ | {"CI": "https://github.com/simonw/datasette-auth0/actions", "Changelog": "https://github.com/simonw/datasette-auth0/releases", "Homepage": "https://github.com/simonw/datasette-auth0", "Issues": "https://github.com/simonw/datasette-auth0/issues"} | https://pypi.org/project/datasette-auth0/0.1/ | ["datasette", "pytest ; extra == 'test'", "pytest-asyncio ; extra == 'test'", "pytest-httpx ; extra == 'test'"] | >=3.7 | 0.1 | 0 | ||||||
datasette-backup | Plugin adding backup options to Datasette | [] | # datasette-backup [](https://pypi.org/project/datasette-backup/) [](https://github.com/simonw/datasette-backup/releases) [](https://github.com/simonw/datasette-backup/actions?query=workflow%3ATest) [](https://github.com/simonw/datasette-backup/blob/main/LICENSE) Plugin adding backup options to Datasette ## Installation Install this plugin in the same environment as Datasette. $ datasette install datasette-backup ## Usage Once installed, you can download a SQL backup of any of your databases from: /-/backup/dbname.sql ## Development To set up this plugin locally, first checkout the code. Then create a new virtual environment: cd datasette-backup python3 -mvenv venv source venv/bin/activate Or if you are using `pipenv`: pipenv shell Now install the dependencies and tests: pip install -e '.[test]' To run the tests: pytest | Simon Willison | text/markdown | https://github.com/simonw/datasette-backup | Apache License, Version 2.0 | https://pypi.org/project/datasette-backup/ | https://pypi.org/project/datasette-backup/ | {"CI": "https://github.com/simonw/datasette-backup/actions", "Changelog": "https://github.com/simonw/datasette-backup/releases", "Homepage": "https://github.com/simonw/datasette-backup", "Issues": "https://github.com/simonw/datasette-backup/issues"} | https://pypi.org/project/datasette-backup/0.1/ | ["datasette", "sqlite-dump (>=0.1.1)", "pytest ; extra == 'test'", "pytest-asyncio ; extra == 'test'", "httpx ; extra == 'test'", "sqlite-utils ; extra == 'test'"] | 0.1 | 0 | |||||||
datasette-basemap | A basemap for Datasette and datasette-leaflet | [] | # datasette-basemap [](https://pypi.org/project/datasette-basemap/) [](https://github.com/simonw/datasette-basemap/releases) [](https://github.com/simonw/datasette-basemap/actions?query=workflow%3ATest) [](https://github.com/simonw/datasette-basemap/blob/main/LICENSE) A basemap for Datasette and datasette-leaflet ## Installation Install this plugin in the same environment as Datasette. $ datasette install datasette-basemap ## Usage This plugin will make a `basemap` database available containing OpenStreetMap tiles in the [mbtiles](https://github.com/mapbox/mbtiles-spec) format. It is designed for use with the [datasette-tiles](https://datasette.io/plugins/datasette-tiles) tile server plugin. ## Demo You can preview this map at https://datasette-tiles-demo.datasette.io/-/tiles/basemap and browse the database directly at https://datasette-tiles-demo.datasette.io/basemap ## License The data bundled with this package is © OpenStreetMap contributors, licensed under the [Open Data Commons Open Database License](https://opendatacommons.org/licenses/odbl/). See [this page](https://www.openstreetmap.org/copyright) for more details. ## Development To set up this plugin locally, first checkout the code. Then create a new virtual environment: cd datasette-basemap python3 -mvenv venv source venv/bin/activate Or if you are using `pipenv`: pipenv shell Now install the dependencies and tests: pip install -e '.[test]' To run the tests: pytest | Simon Willison | text/markdown | https://github.com/simonw/datasette-basemap | Apache License, Version 2.0 | https://pypi.org/project/datasette-basemap/ | https://pypi.org/project/datasette-basemap/ | {"CI": "https://github.com/simonw/datasette-basemap/actions", "Changelog": "https://github.com/simonw/datasette-basemap/releases", "Homepage": "https://github.com/simonw/datasette-basemap", "Issues": "https://github.com/simonw/datasette-basemap/issues"} | https://pypi.org/project/datasette-basemap/0.2/ | ["datasette", "pytest ; extra == 'test'", "pytest-asyncio ; extra == 'test'"] | >=3.6 | 0.2 | 0 | ||||||
datasette-block | Block all access to specific path prefixes | [] | # datasette-block [](https://pypi.org/project/datasette-block/) [](https://github.com/simonw/datasette-block/releases) [](https://github.com/simonw/datasette-block/actions?query=workflow%3ATest) [](https://github.com/simonw/datasette-block/blob/main/LICENSE) Block all access to specific path prefixes ## Installation Install this plugin in the same environment as Datasette. $ datasette install datasette-block ## Configuration Add the following to `metadata.json` to block specific path prefixes: ```json { "plugins": { "datasette-block": { "prefixes": ["/all/"] } } } ``` This will cause a 403 error to be returned for any path beginning with `/all/`. This blocking happens as an ASGI wrapper around Datasette. ## Why would you need this? You almost always would not. I use it with `datasette-ripgrep` to block access to static assets for unauthenticated users. ## Development To set up this plugin locally, first checkout the code. Then create a new virtual environment: cd datasette-block python3 -mvenv venv source venv/bin/activate Or if you are using `pipenv`: pipenv shell Now install the dependencies and tests: pip install -e '.[test]' To run the tests: pytest | Simon Willison | text/markdown | https://github.com/simonw/datasette-block | Apache License, Version 2.0 | https://pypi.org/project/datasette-block/ | https://pypi.org/project/datasette-block/ | {"CI": "https://github.com/simonw/datasette-block/actions", "Changelog": "https://github.com/simonw/datasette-block/releases", "Homepage": "https://github.com/simonw/datasette-block", "Issues": "https://github.com/simonw/datasette-block/issues"} | https://pypi.org/project/datasette-block/0.1.1/ | ["datasette", "pytest ; extra == 'test'", "pytest-asyncio ; extra == 'test'", "asgi-lifespan ; extra == 'test'"] | >=3.6 | 0.1.1 | 0 | ||||||
datasette-block-robots | Datasette plugin that blocks all robots using robots.txt | [] | # datasette-block-robots [](https://pypi.org/project/datasette-block-robots/) [](https://github.com/simonw/datasette-block-robots/releases) [](https://github.com/simonw/datasette-block-robots/blob/master/LICENSE) Datasette plugin that blocks robots and crawlers using robots.txt ## Installation Install this plugin in the same environment as Datasette. $ pip install datasette-block-robots ## Usage Having installed the plugin, `/robots.txt` on your Datasette instance will return the following: User-agent: * Disallow: / This will request all robots and crawlers not to visit any of the pages on your site. Here's a demo of the plugin in action: https://sqlite-generate-demo.datasette.io/robots.txt ## Configuration By default the plugin will block all access to the site, using `Disallow: /`. If you want the index page to be indexed by search engines without crawling the database, table or row pages themselves, you can use the following: ```json { "plugins": { "datasette-block-robots": { "allow_only_index": true } } } ``` This will return a `/robots.txt` like so: User-agent: * Disallow: /db1 Disallow: /db2 With a `Disallow` line for every attached database. To block access to specific areas of the site using custom paths, add this to your `metadata.json` configuration file: ```json { "plugins": { "datasette-block-robots": { "disallow": ["/mydatabase/mytable"] } } } ``` This will result in a `/robots.txt` that looks like this: User-agent: * Disallow: /mydatabase/mytable Alternatively you can set the full contents of the `robots.txt` file using the `literal` configuration option. Here's how to do that if you are using YAML rather than JSON and have a `metadata.yml` file: … | Simon Willison | text/markdown | https://github.com/simonw/datasette-block-robots | Apache License, Version 2.0 | https://pypi.org/project/datasette-block-robots/ | https://pypi.org/project/datasette-block-robots/ | {"CI": "https://github.com/simonw/datasette-block-robots/actions", "Changelog": "https://github.com/simonw/datasette-block-robots/releases", "Homepage": "https://github.com/simonw/datasette-block-robots", "Issues": "https://github.com/simonw/datasette-block-robots/issues"} | https://pypi.org/project/datasette-block-robots/1.1/ | ["datasette (>=0.50)", "pytest ; extra == 'test'", "pytest-asyncio ; extra == 'test'", "httpx ; extra == 'test'"] | 1.1 | 0 | |||||||
datasette-bplist | Datasette plugin for working with Apple's binary plist format | [] | # datasette-bplist [](https://pypi.org/project/datasette-bplist/) [](https://circleci.com/gh/simonw/datasette-bplist) [](https://github.com/simonw/datasette-bplist/blob/master/LICENSE) Datasette plugin for working with Apple's [binary plist](https://en.wikipedia.org/wiki/Property_list) format. This plugin adds two features: a display hook and a SQL function. The display hook will detect any database values that are encoded using the binary plist format. It will decode them, convert them into JSON and display them pretty-printed in the Datasette UI. The SQL function `bplist_to_json(value)` can be used inside a SQL query to convert a binary plist value into a JSON string. This can then be used with SQLite's `json_extract()` function or with the [datasette-jq](https://github.com/simonw/datasette-jq) plugin to further analyze that data as part of a SQL query. Install this plugin in the same environment as Datasette to enable this new functionality: pip install datasette-bplist ## Trying it out If you use a Mac you already have plenty of SQLite databases that contain binary plist data. One example is the database that powers the Apple Photos app. This database tends to be locked, so you will need to create a copy of the database in order to run queries against it: cp ~/Pictures/Photos\ Library.photoslibrary/database/photos.db /tmp/photos.db The database also makes use of custom SQLite extensions which prevent it from opening in Datasette. You can work around this by exporting the data that you want to experiment with into a new SQLite file. I recommend trying this plugin against the `RKMaster_dataNote` table, which contains plist-encoded EXIF metadata about the photos you have taken. You can export that table into a fresh database like so: sqlite3 /tmp/photos.db ".dump RKMaster_dataNote" | … | Simon Willison | text/markdown | https://github.com/simonw/datasette-bplist | Apache License, Version 2.0 | https://pypi.org/project/datasette-bplist/ | https://pypi.org/project/datasette-bplist/ | {"Homepage": "https://github.com/simonw/datasette-bplist"} | https://pypi.org/project/datasette-bplist/0.1/ | ["datasette", "bpylist", "pytest ; extra == 'test'"] | 0.1 | 0 | |||||||
datasette-clone | Create a local copy of database files from a Datasette instance | [] | # datasette-clone [](https://pypi.org/project/datasette-clone/) [](https://circleci.com/gh/simonw/datasette-clone) [](https://github.com/simonw/datasette-clone/blob/master/LICENSE) Create a local copy of database files from a Datasette instance. See [datasette-clone](https://simonwillison.net/2020/Apr/14/datasette-clone/) on my blog for background on this project. ## How to install $ pip install datasette-clone ## Usage This only works against Datasette instances running immutable databases (with the `-i` option). Databases published using the `datasette publish` command should be compatible with this tool. To download copies of all `.db` files from an instance, run: datasette-clone https://latest.datasette.io You can provide an optional second argument to specify a directory: datasette-clone https://latest.datasette.io /tmp/here-please The command stores its own copy of a `databases.json` manifest and uses it to only download databases that have changed the next time you run the command. It also stores a copy of the instance's `metadata.json` to ensure you have a copy of any source and licensing information for the downloaded databases. If your instance is protected by an API token, you can use `--token` to provide it: datasette-clone https://latest.datasette.io --token=xyz For verbose output showing what the tool is doing, use `-v`. | Simon Willison | text/markdown | https://github.com/simonw/datasette-clone | Apache License, Version 2.0 | https://pypi.org/project/datasette-clone/ | https://pypi.org/project/datasette-clone/ | {"Homepage": "https://github.com/simonw/datasette-clone"} | https://pypi.org/project/datasette-clone/0.5/ | ["requests", "click", "pytest ; extra == 'test'", "requests-mock ; extra == 'test'"] | 0.5 | 0 | |||||||
datasette-cluster-map | Datasette plugin that shows a map for any data with latitude/longitude columns | ["Framework :: Datasette", "License :: OSI Approved :: Apache Software License"] | # datasette-cluster-map [](https://pypi.org/project/datasette-cluster-map/) [](https://github.com/simonw/datasette-cluster-map/releases) [](https://github.com/simonw/datasette-cluster-map/blob/main/LICENSE) A [Datasette plugin](https://docs.datasette.io/en/stable/plugins.html) that detects tables with `latitude` and `longitude` columns and then plots them on a map using [Leaflet.markercluster](https://github.com/Leaflet/Leaflet.markercluster). More about this project: [Datasette plugins, and building a clustered map visualization](https://simonwillison.net/2018/Apr/20/datasette-plugins/). ## Demo [global-power-plants.datasettes.com](https://global-power-plants.datasettes.com/global-power-plants/global-power-plants) hosts a demo of this plugin running against a database of 33,000 power plants around the world.  ## Installation Run `datasette install datasette-cluster-map` to add this plugin to your Datasette virtual environment. Datasette will automatically load the plugin if it is installed in this way. If you are deploying using the `datasette publish` command you can use the `--install` option: datasette publish cloudrun mydb.db --install=datasette-cluster-map If any of your tables have a `latitude` and `longitude` column, a map will be automatically displayed. ## Configuration If your columns are called something else you can configure the column names using [plugin configuration](https://docs.datasette.io/en/stable/plugins.html#plugin-configuration) in a `metadata.json` file. For example, if all of your columns are called `xlat` and `xlng` you can create a `metadata.json` file like this: ```json { "title": "Regular metadata keys can go here too", "p… | Simon Willison | text/markdown | https://github.com/simonw/datasette-cluster-map | Apache License, Version 2.0 | https://pypi.org/project/datasette-cluster-map/ | https://pypi.org/project/datasette-cluster-map/ | {"CI": "https://github.com/simonw/datasette-cluster-map/actions", "Changelog": "https://github.com/simonw/datasette-cluster-map/releases", "Homepage": "https://github.com/simonw/datasette-cluster-map", "Issues": "https://github.com/simonw/datasette-cluster-map/issues"} | https://pypi.org/project/datasette-cluster-map/0.17.2/ | ["datasette (>=0.54)", "datasette-leaflet (>=0.2.2)", "pytest ; extra == 'test'", "pytest-asyncio ; extra == 'test'", "httpx ; extra == 'test'", "sqlite-utils ; extra == 'test'"] | 0.17.2 | 0 | |||||||
datasette-column-inspect | Experimental Datasette plugin for inspecting columns | [] | # datasette-column-inspect [](https://pypi.org/project/datasette-column-inspect/) [](https://github.com/simonw/datasette-column-inspect/releases) [](https://github.com/simonw/datasette-column-inspect/actions?query=workflow%3ATest) [](https://github.com/simonw/datasette-column-inspect/blob/main/LICENSE) Highly experimental Datasette plugin for inspecting columns. ## Installation Install this plugin in the same environment as Datasette. $ pip install datasette-column-inspect ## Usage This plugin adds an icon to each column on the table page which opens an inspection side panel. | Simon Willison | text/markdown | https://github.com/simonw/datasette-column-inspect | Apache License, Version 2.0 | https://pypi.org/project/datasette-column-inspect/ | https://pypi.org/project/datasette-column-inspect/ | {"Homepage": "https://github.com/simonw/datasette-column-inspect"} | https://pypi.org/project/datasette-column-inspect/0.2a0/ | ["datasette", "pytest ; extra == 'test'", "pytest-asyncio ; extra == 'test'", "httpx ; extra == 'test'", "sqlite-utils ; extra == 'test'"] | 0.2a0 | 0 | |||||||
datasette-configure-asgi | Datasette plugin for configuring arbitrary ASGI middleware | [] | # datasette-configure-asgi [](https://pypi.org/project/datasette-configure-asgi/) [](https://circleci.com/gh/simonw/datasette-configure-asgi) [](https://github.com/simonw/datasette-configure-asgi/blob/master/LICENSE) Datasette plugin for configuring arbitrary ASGI middleware ## Installation pip install datasette-configure-asgi ## Usage This plugin only takes effect if your `metadata.json` file contains relevant top-level plugin configuration in a `"datasette-configure-asgi"` configuration key. For example, to wrap your Datasette instance in the `asgi-log-to-sqlite` middleware configured to write logs to `/tmp/log.db` you would use the following: ```json { "plugins": { "datasette-configure-asgi": [ { "class": "asgi_log_to_sqlite.AsgiLogToSqlite", "args": { "file": "/tmp/log.db" } } ] } } ``` The `"datasette-configure-asgi"` key should be a list of JSON objects. Each object should have a `"class"` key indicating the class to be used, and an optional `"args"` key providing any necessary arguments to be passed to that class constructor. ## Plugin structure This plugin can be used to wrap your Datasette instance in any ASGI middleware that conforms to the following structure: ```python class SomeAsgiMiddleware: def __init__(self, app, arg1, arg2): self.app = app self.arg1 = arg1 self.arg2 = arg2 async def __call__(self, scope, receive, send): start = time.time() await self.app(scope, receive, send) end = time.time() print("Time taken: {}".format(end - start)) ``` So the middleware is a class with a constructor which takes the wrapped application as a first argument, `app`, followed by further named arguments … | Simon Willison | text/markdown | https://github.com/simonw/datasette-configure-asgi | Apache License, Version 2.0 | https://pypi.org/project/datasette-configure-asgi/ | https://pypi.org/project/datasette-configure-asgi/ | {"Homepage": "https://github.com/simonw/datasette-configure-asgi"} | https://pypi.org/project/datasette-configure-asgi/0.1/ | ["pytest ; extra == 'test'", "pytest-asyncio ; extra == 'test'", "asgiref (==3.1.2) ; extra == 'test'", "datasette ; extra == 'test'"] | 0.1 | 0 | |||||||
datasette-configure-fts | Datasette plugin for enabling full-text search against selected table columns | [] | # datasette-configure-fts [](https://pypi.org/project/datasette-configure-fts/) [](https://github.com/simonw/datasette-configure-fts/releases) [](https://github.com/simonw/datasette-configure-fts/actions?query=workflow%3ATest) [](https://github.com/simonw/datasette-configure-fts/blob/main/LICENSE) Datasette plugin for enabling full-text search against selected table columns ## Installation Install this plugin in the same environment as Datasette. $ datasette install datasette-configure-fts ## Usage Having installed the plugin, visit `/-/configure-fts` on your Datasette instance to configure FTS for tables on attached writable databases. By default only [the root actor](https://datasette.readthedocs.io/en/stable/authentication.html#using-the-root-actor) can access the page - so you'll need to run Datasette with the `--root` option and click on the link shown in the terminal to sign in and access the page. The `configure-fts` permission governs access. You can use permission plugins such as [datasette-permissions-sql](https://github.com/simonw/datasette-permissions-sql) to grant additional access to the write interface. | Simon Willison | text/markdown | https://github.com/simonw/datasette-configure-fts | Apache License, Version 2.0 | https://pypi.org/project/datasette-configure-fts/ | https://pypi.org/project/datasette-configure-fts/ | {"Homepage": "https://github.com/simonw/datasette-configure-fts"} | https://pypi.org/project/datasette-configure-fts/1.1/ | ["datasette (>=0.51)", "sqlite-utils (>=2.10)", "pytest ; extra == 'test'", "pytest-asyncio ; extra == 'test'", "httpx ; extra == 'test'"] | 1.1 | 0 | |||||||
datasette-copy-to-memory | Copy database files into an in-memory database on startup | ["Framework :: Datasette", "License :: OSI Approved :: Apache Software License"] | # datasette-copy-to-memory [](https://pypi.org/project/datasette-copy-to-memory/) [](https://github.com/simonw/datasette-copy-to-memory/releases) [](https://github.com/simonw/datasette-copy-to-memory/actions?query=workflow%3ATest) [](https://github.com/simonw/datasette-copy-to-memory/blob/main/LICENSE) Copy database files into an in-memory database on startup This plugin is **highly experimental**. It currently exists to support Datasette performance research, and is not designed for actual production usage. ## Installation Install this plugin in the same environment as Datasette. datasette install datasette-copy-to-memory ## Usage On startup, Datasette will create an in-memory named database for each attached database. This database will have the same name but with `_memory` at the end. So running this: datasette fixtures.db Will serve two databases: the original at `/fixtures` and the in-memory copy at `/fixtures_memory`. ## Demo A demo is running on [latest-with-plugins.datasette.io](https://latest-with-plugins.datasette.io/) - the [/fixtures_memory](https://latest-with-plugins.datasette.io/fixtures_memory) table there is provided by this plugin. ## Configuration By default every attached database file will be loaded into a `_memory` copy. You can use plugin configuration to specify just a subset of the database. For example, to create `github_memory` but not `fixtures_memory` you would use the following `metadata.yml` file: ```yaml plugins: datasette-copy-to-memory: databases: - github ``` Then start Datasette like this: datasette github.db fixtures.db -m metadata.yml If you don't want to have a `fixtures` and `fixtures_memory` data… | Simon Willison | text/markdown | https://github.com/simonw/datasette-copy-to-memory | Apache License, Version 2.0 | https://pypi.org/project/datasette-copy-to-memory/ | https://pypi.org/project/datasette-copy-to-memory/ | {"CI": "https://github.com/simonw/datasette-copy-to-memory/actions", "Changelog": "https://github.com/simonw/datasette-copy-to-memory/releases", "Homepage": "https://github.com/simonw/datasette-copy-to-memory", "Issues": "https://github.com/simonw/datasette-copy-to-memory/issues"} | https://pypi.org/project/datasette-copy-to-memory/0.2/ | ["datasette", "pytest ; extra == 'test'", "pytest-asyncio ; extra == 'test'", "sqlite-utils ; extra == 'test'"] | >=3.7 | 0.2 | 0 | ||||||
datasette-copyable | Datasette plugin for outputting tables in formats suitable for copy and paste | [] | # datasette-copyable [](https://pypi.org/project/datasette-copyable/) [](https://github.com/simonw/datasette-copyable/releases) [](https://github.com/simonw/datasette-copyable/blob/master/LICENSE) Datasette plugin for outputting tables in formats suitable for copy and paste ## Installation Install this plugin in the same environment as Datasette. $ pip install datasette-copyable ## Demo You can try this plugin on [fivethirtyeight.datasettes.com](https://fivethirtyeight.datasettes.com/) - browse for tables or queries there and look for the "copyable" link. Here's an example for a table of [airline safety data](https://fivethirtyeight.datasettes.com/fivethirtyeight/airline-safety%2Fairline-safety.copyable). ## Usage This plugin adds a `.copyable` output extension to every table, view and query. Navigating to this page will show an interface allowing you to select a format for copying and pasting the demo. The default is TSV, which is suitable for copying into Google Sheets or Excel. You can add `?_raw=1` to get back just the raw data. ## Development To set up this plugin locally, first checkout the code. Then create a new virtual environment: cd datasette-copyable python3 -mvenv venv source venv/bin/activate Or if you are using `pipenv`: pipenv shell Now install the dependencies and tests: pip install -e '.[test]' To run the tests: pytest | Simon Willison | text/markdown | https://github.com/simonw/datasette-copyable | Apache License, Version 2.0 | https://pypi.org/project/datasette-copyable/ | https://pypi.org/project/datasette-copyable/ | {"CI": "https://github.com/simonw/datasette-copyable/actions", "Changelog": "https://github.com/simonw/datasette-copyable/releases", "Homepage": "https://github.com/simonw/datasette-copyable", "Issues": "https://github.com/simonw/datasette-copyable/issues"} | https://pypi.org/project/datasette-copyable/0.3.1/ | ["datasette (>=0.49)", "tabulate", "pytest ; extra == 'test'", "pytest-asyncio ; extra == 'test'", "httpx ; extra == 'test'", "sqlite-utils ; extra == 'test'"] | 0.3.1 | 0 | |||||||
datasette-cors | Datasette plugin for configuring CORS headers | [] | # datasette-cors [](https://pypi.org/project/datasette-cors/) [](https://circleci.com/gh/simonw/datasette-cors) [](https://github.com/simonw/datasette-cors/blob/master/LICENSE) Datasette plugin for configuring CORS headers, based on https://github.com/simonw/asgi-cors You can use this plugin to allow JavaScript running on a whitelisted set of domains to make `fetch()` calls to the JSON API provided by your Datasette instance. ## Installation pip install datasette-cors ## Configuration You need to add some configuration to your Datasette `metadata.json` file for this plugin to take effect. To whitelist specific domains, use this: ```json { "plugins": { "datasette-cors": { "hosts": ["https://www.example.com"] } } } ``` You can also whitelist patterns like this: ```json { "plugins": { "datasette-cors": { "host_wildcards": ["https://*.example.com"] } } } ``` ## Testing it To test this plugin out, run it locally by saving one of the above examples as `metadata.json` and running this: $ datasette --memory -m metadata.json Now visit https://www.example.com/ in your browser, open the browser developer console and paste in the following: ```javascript fetch("http://127.0.0.1:8001/:memory:.json?sql=select+sqlite_version%28%29").then(r => r.json()).then(console.log) ``` If the plugin is running correctly, you will see the JSON response output to the console. | Simon Willison | text/markdown | https://github.com/simonw/datasette-cors | Apache License, Version 2.0 | https://pypi.org/project/datasette-cors/ | https://pypi.org/project/datasette-cors/ | {"Homepage": "https://github.com/simonw/datasette-cors"} | https://pypi.org/project/datasette-cors/0.3/ | ["asgi-cors (~=0.3)", "datasette (~=0.29) ; extra == 'test'", "pytest ; extra == 'test'", "pytest-asyncio ; extra == 'test'", "asgiref (~=3.1.2) ; extra == 'test'"] | 0.3 | 0 | |||||||
datasette-css-properties | Experimental Datasette output plugin using CSS properties | [] | # datasette-css-properties [](https://pypi.org/project/datasette-css-properties/) [](https://github.com/simonw/datasette-css-properties/releases) [](https://github.com/simonw/datasette-css-properties/actions?query=workflow%3ATest) [](https://github.com/simonw/datasette-css-properties/blob/main/LICENSE) Extremely experimental Datasette output plugin using CSS properties, inspired by [Custom Properties as State](https://css-tricks.com/custom-properties-as-state/) by Chris Coyier. ## Installation Install this plugin in the same environment as Datasette. $ datasette install datasette-css-properties ## Usage Once installed, this plugin adds a `.css` output format to every query result. This will return the first row in the query as a valid CSS file, defining each column as a custom property: Example: https://latest-with-plugins.datasette.io/fixtures/roadside_attractions.css produces: ```css :root { --pk: '1'; --name: 'The Mystery Spot'; --address: '465 Mystery Spot Road, Santa Cruz, CA 95065'; --latitude: '37.0167'; --longitude: '-122.0024'; } ``` If you link this stylesheet to your page you can then do things like this; ```html <link rel="stylesheet" href="https://latest-with-plugins.datasette.io/fixtures/roadside_attractions.css"> <style> .attraction-name:after { content: var(--name); } </style> <p class="attraction-name">Attraction name: </p> ``` Values will be quoted as CSS strings by default. If you want to return a "raw" value without the quotes - for example to set a CSS property that is numeric or a color, you can specify that column name using the `?_raw=column-name` parameter. This can be passed multiple times. Consider [this example query](http… | Simon Willison | text/markdown | https://github.com/simonw/datasette-css-properties | Apache License, Version 2.0 | https://pypi.org/project/datasette-css-properties/ | https://pypi.org/project/datasette-css-properties/ | {"CI": "https://github.com/simonw/datasette-css-properties/actions", "Changelog": "https://github.com/simonw/datasette-css-properties/releases", "Homepage": "https://github.com/simonw/datasette-css-properties", "Issues": "https://github.com/simonw/datasette-css-properties/issues"} | https://pypi.org/project/datasette-css-properties/0.2/ | ["datasette", "pytest ; extra == 'test'", "pytest-asyncio ; extra == 'test'"] | >=3.6 | 0.2 | 0 | ||||||
datasette-dashboards | Datasette plugin providing data dashboards from metadata | [] | # datasette-dashboards > Datasette plugin providing data dashboards from metadata [](https://pypi.org/project/datasette-dashboards/) [](https://github.com/rclement/datasette-dashboards/actions/workflows/ci-cd.yml) [](https://codecov.io/gh/rclement/datasette-dashboards) [](https://github.com/rclement/datasette-dashboards/blob/master/LICENSE) Try out a live demo at [https://datasette-dashboards-demo.vercel.app](https://datasette-dashboards-demo.vercel.app/-/dashboards) **WARNING**: this plugin is still experimental and not ready for production. Some breaking changes might happen between releases before reaching a stable version. Use it at your own risk!  ## Installation Install this plugin in the same environment as Datasette: ```bash $ datasette install datasette-dashboards ``` ## Usage Define dashboards within `metadata.yml` / `metadata.json`: ```yaml plugins: datasette-dashboards: my-dashboard: title: My Dashboard description: Showing some nice metrics layout: - [analysis-note, events-count] - [analysis-note, events-source] filters: date_start: name: Date Start type: date default: "2021-01-01" date_end: name: Date End type: date charts: analysis-note: library: markdown display: |- # Analysis notes > A quick rundown of events statistics and KPIs events-count: title: Total number of events db: jobs query: SELECT count(*) as count FROM events … | Romain Clement | text/markdown | https://github.com/rclement/datasette-dashboards | Apache License, Version 2.0 | https://pypi.org/project/datasette-dashboards/ | https://pypi.org/project/datasette-dashboards/ | {"Changelog": "https://github.com/rclement/datasette-dashboards/blob/master/CHANGELOG.md", "Homepage": "https://github.com/rclement/datasette-dashboards"} | https://pypi.org/project/datasette-dashboards/0.2.0/ | ["datasette", "datasette-render-markdown", "faker ; extra == 'test'", "pytest ; extra == 'test'", "pytest-asyncio ; extra == 'test'", "sqlite-utils ; extra == 'test'"] | >=3.7 | 0.2.0 | 0 | ||||||
datasette-dateutil | dateutil functions for Datasette | [] | # datasette-dateutil [](https://pypi.org/project/datasette-dateutil/) [](https://github.com/simonw/datasette-dateutil/releases) [](https://github.com/simonw/datasette-dateutil/actions?query=workflow%3ATest) [](https://github.com/simonw/datasette-dateutil/blob/main/LICENSE) dateutil functions for Datasette ## Installation Install this plugin in the same environment as Datasette. $ datasette install datasette-dateutil ## Usage This function adds custom SQL functions that expose functionality from the [dateutil](https://dateutil.readthedocs.io/) Python library. Once installed, the following SQL functions become available: ### Parsing date strings - `dateutil_parse(text)` - returns an ISO8601 date string parsed from the text, or `null` if the input could not be parsed. `dateutil_parse("10 october 2020 3pm")` returns `2020-10-10T15:00:00`. - `dateutil_parse_fuzzy(text)` - same as `dateutil_parse()` but this also works against strings that contain a date somewhere within them - that date will be returned, or `null` if no dates could be found. `dateutil_parse_fuzzy("This is due 10 september")` returns `2020-09-10T00:00:00` (but will start returning the 2021 version of that if the year is 2021). The `dateutil_parse()` and `dateutil_parse_fuzzy()` functions both follow the American convention of assuming that `1/2/2020` lists the month first, evaluating this example to the 2nd of January. If you want to assume that the day comes first, use these two functions instead: - `dateutil_parse_dayfirst(text)` - `dateutil_parse_fuzzy_dayfirst(text)` Here's a query demonstrating these functions: ```sql select dateutil_parse("10 october 2020 3pm"), dateutil_parse_fuzzy("This is due 10 septem… | Simon Willison | text/markdown | https://github.com/simonw/datasette-dateutil | Apache License, Version 2.0 | https://pypi.org/project/datasette-dateutil/ | https://pypi.org/project/datasette-dateutil/ | {"CI": "https://github.com/simonw/datasette-dateutil/actions", "Changelog": "https://github.com/simonw/datasette-dateutil/releases", "Homepage": "https://github.com/simonw/datasette-dateutil", "Issues": "https://github.com/simonw/datasette-dateutil/issues"} | https://pypi.org/project/datasette-dateutil/0.3/ | ["datasette", "python-dateutil", "pytest ; extra == 'test'", "pytest-asyncio ; extra == 'test'", "httpx ; extra == 'test'"] | 0.3 | 0 | |||||||
datasette-debug-asgi | Datasette plugin for dumping out the ASGI scope | [] | # datasette-debug-asgi [](https://pypi.org/project/datasette-debug-asgi/) [](https://github.com/simonw/datasette-debug-asgi/releases) [](https://github.com/simonw/datasette-debug-asgi/actions?query=workflow%3ATest) [](https://github.com/simonw/datasette-debug-asgi/blob/main/LICENSE) Datasette plugin for dumping out the ASGI scope. Adds a new URL at `/-/asgi-scope` which shows the current ASGI scope. Demo here: https://datasette.io/-/asgi-scope ## Installation pip install datasette-debug-asgi ## Usage Visit `/-/asgi-scope` to see debug output showing the ASGI scope. You can add query string parameters such as `/-/asgi-scope?q=hello`. You can also add extra path components such as `/-/asgi-scope/more/path/here`. | Simon Willison | text/markdown | https://github.com/simonw/datasette-debug-asgi | Apache License, Version 2.0 | https://pypi.org/project/datasette-debug-asgi/ | https://pypi.org/project/datasette-debug-asgi/ | {"CI": "https://github.com/simonw/datasette-debug-asgi/actions", "Changelog": "https://github.com/simonw/datasette-debug-asgi/releases", "Homepage": "https://github.com/simonw/datasette-debug-asgi", "Issues": "https://github.com/simonw/datasette-debug-asgi/issues"} | https://pypi.org/project/datasette-debug-asgi/1.1/ | ["datasette (>=0.50)", "pytest ; extra == 'test'", "pytest-asyncio ; extra == 'test'"] | >=3.6 | 1.1 | 0 | ||||||
datasette-edit-schema | Datasette plugin for modifying table schemas | [] | # datasette-edit-schema [![PyPI](https://img.shields.io/pypi/v/datasette-edit-schema.svg)](https://pypi.org/project/datasette-edit-schema/) [![Changelog](https://img.shields.io/github/v/release/simonw/datasette-edit-schema?include_prereleases&label=changelog)](https://github.com/simonw/datasette-edit-schema/releases) [![Tests](https://github.com/simonw/datasette-edit-schema/workflows/Test/badge.svg)](https://github.com/simonw/datasette-edit-schema/actions?query=workflow%3ATest) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/datasette-edit-schema/blob/master/LICENSE) Datasette plugin for modifying table schemas ## Features * Add new columns to a table * Rename columns in a table * Modify the type of columns in a table * Re-order the columns in a table * Rename a table * Delete a table ## Installation Install this plugin in the same environment as Datasette. $ pip install datasette-edit-schema ## Usage Navigate to `/-/edit-schema/dbname/tablename` on your Datasette instance to edit a specific table. Use `/-/edit-schema/dbname` to create a new table in a specific database. By default only [the root actor](https://datasette.readthedocs.io/en/stable/authentication.html#using-the-root-actor) can access the page - so you'll need to run Datasette with the `--root` option and click on the link shown in the terminal to sign in and access the page. ## Permissions The `edit-schema` permission governs access. You can use permission plugins such as [datasette-permissions-sql](https://github.com/simonw/datasette-permissions-sql) to grant additional access to the write interface. These permission checks will call the `permission_allowed()` plugin hook with three arguments: - `action` will be the string `"edit-schema"` - `actor` will be the currently authenticated actor - usually a dictionary - `resource` will be the string name of the database ## Screenshot … | Simon Willison | text/markdown | https://github.com/simonw/datasette-edit-schema | Apache License, Version 2.0 | https://pypi.org/project/datasette-edit-schema/ | https://pypi.org/project/datasette-edit-schema/ | {"CI": "https://github.com/simonw/datasette-edit-schema/actions", "Changelog": "https://github.com/simonw/datasette-edit-schema/releases", "Homepage": "https://github.com/simonw/datasette-edit-schema", "Issues": "https://github.com/simonw/datasette-edit-schema/issues"} | https://pypi.org/project/datasette-edit-schema/0.5.1/ | ["datasette", "sqlite-utils (>=3.10)", "pytest ; extra == 'test'", "pytest-asyncio ; extra == 'test'"] | >=3.7 | 0.5.1 | 0 |
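To illustrate the permission hook described above, here is a hypothetical permission plugin granting `edit-schema` access to one specific actor; the actor ID `"staff"` is an assumption for the example:

```python
# Hypothetical plugin: allow an actor with id "staff" to edit schemas.
from datasette import hookimpl

@hookimpl
def permission_allowed(actor, action, resource):
    # resource is the string name of the database being edited
    if action == "edit-schema" and actor and actor.get("id") == "staff":
        return True
    # returning None (implicitly) defers the decision to other plugins
```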
datasette-edit-templates | Plugin allowing Datasette templates to be edited within Datasette | [] | # datasette-edit-templates [![PyPI](https://img.shields.io/pypi/v/datasette-edit-templates.svg)](https://pypi.org/project/datasette-edit-templates/) [![Changelog](https://img.shields.io/github/v/release/simonw/datasette-edit-templates?include_prereleases&label=changelog)](https://github.com/simonw/datasette-edit-templates/releases) [![Tests](https://github.com/simonw/datasette-edit-templates/workflows/Test/badge.svg)](https://github.com/simonw/datasette-edit-templates/actions?query=workflow%3ATest) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/datasette-edit-templates/blob/main/LICENSE) Plugin allowing Datasette templates to be edited within Datasette. ## Installation Install this plugin in the same environment as Datasette. $ datasette install datasette-edit-templates ## Usage Once installed, sign in as the root user using `datasette mydb.db --root`. On startup, a `_templates_` table will be created in the database you are running Datasette against. Use the app menu to navigate to the `/-/edit-templates` page, and edit templates there. Changes should become visible instantly, and will be persisted to your database. ## Development To set up this plugin locally, first checkout the code. Then create a new virtual environment: cd datasette-edit-templates python3 -mvenv venv source venv/bin/activate Or if you are using `pipenv`: pipenv shell Now install the dependencies and test dependencies: pip install -e '.[test]' To run the tests: pytest | Simon Willison | text/markdown | https://github.com/simonw/datasette-edit-templates | Apache License, Version 2.0 | https://pypi.org/project/datasette-edit-templates/ | https://pypi.org/project/datasette-edit-templates/ | {"CI": "https://github.com/simonw/datasette-edit-templates/actions", "Changelog": "https://github.com/simonw/datasette-edit-templates/releases", "Homepage": "https://github.com/simonw/datasette-edit-templates", "Issues": "https://github.com/simonw/datasette-edit-templates/issues"} | https://pypi.org/project/datasette-edit-templates/0.1/ | ["datasette (>=0.63)", "pytest ; extra == 'test'", "pytest-asyncio ; extra == 'test'", "sqlite-utils ; extra == 'test'"] | >=3.7 | 0.1 | 0 |
datasette-export-notebook | Datasette plugin providing instructions for exporting data to Jupyter or Observable | [] | # datasette-export-notebook [](https://pypi.org/project/datasette-export-notebook/) [](https://github.com/simonw/datasette-export-notebook/releases) [](https://github.com/simonw/datasette-export-notebook/actions?query=workflow%3ATest) [](https://github.com/simonw/datasette-export-notebook/blob/main/LICENSE) Datasette plugin providing instructions for exporting data to a [Jupyter](https://jupyter.org/) or [Observable](https://observablehq.com/) notebook. ## Installation Install this plugin in the same environment as Datasette. $ datasette install datasette-export-notebook ## Usage Once installed, the plugin will add a `.Notebook` export option to every table and query. Clicking on this link will show instructions for exporting the data to Jupyter or Observable. ## Demo You can see this plugin in action on the [latest-with-plugins.datasette.io](https://latest-with-plugins.datasette.io/) Datasette instance - for example on [/github/commits.Notebook](https://latest-with-plugins.datasette.io/github/commits.Notebook). ## Development To set up this plugin locally, first checkout the code. Then create a new virtual environment: cd datasette-export-notebook python3 -mvenv venv source venv/bin/activate Or if you are using `pipenv`: pipenv shell Now install the dependencies and tests: pip install -e '.[test]' To run the tests: pytest | Simon Willison | text/markdown | https://github.com/simonw/datasette-export-notebook | Apache License, Version 2.0 | https://pypi.org/project/datasette-export-notebook/ | https://pypi.org/project/datasette-export-notebook/ | {"CI": "https://github.com/simonw/datasette-export-notebook/actions", "Changelog": "https://github.com/simonw/datasette-export-notebook/releases", "Homepage": "https://github.com/simonw/datasette-export-notebook", "Issues": "https://github.com/simonw/datasette-export-notebook/issues"} | https://pypi.org/project/datasette-export-notebook/1.0/ | ["datasette", "pytest ; extra == 'test'", "pytest-asyncio ; extra == 'test'", "sqlite-utils ; extra == 'test'"] | >=3.6 | 1.0 | 0 | ||||||
datasette-expose-env | Datasette plugin to expose selected environment variables at /-/env for debugging | ["Framework :: Datasette", "License :: OSI Approved :: Apache Software License"] | # datasette-expose-env [](https://pypi.org/project/datasette-expose-env/) [](https://github.com/simonw/datasette-expose-env/releases) [](https://github.com/simonw/datasette-expose-env/actions?query=workflow%3ATest) [](https://github.com/simonw/datasette-expose-env/blob/main/LICENSE) Datasette plugin to expose selected environment variables at `/-/env` for debugging ## Installation Install this plugin in the same environment as Datasette. datasette install datasette-expose-env ## Configuration Decide on a list of environment variables you would like to expose, then add the following to your `metadata.yml` configuration: ```yaml plugins: datasette-expose-env: - ENV_VAR_1 - ENV_VAR_2 - ENV_VAR_3 ``` If you are using JSON in a `metadata.json` file use the following: ```json { "plugins": { "datasette-expose-env": [ "ENV_VAR_1", "ENV_VAR_2", "ENV_VAR_3" ] } } ``` Visit `/-/env` on your Datasette instance to see the values of the environment variables. ## Development To set up this plugin locally, first checkout the code. Then create a new virtual environment: cd datasette-expose-env python3 -m venv venv source venv/bin/activate Now install the dependencies and test dependencies: pip install -e '.[test]' To run the tests: pytest | Simon Willison | text/markdown | https://github.com/simonw/datasette-expose-env | Apache License, Version 2.0 | https://pypi.org/project/datasette-expose-env/ | https://pypi.org/project/datasette-expose-env/ | {"CI": "https://github.com/simonw/datasette-expose-env/actions", "Changelog": "https://github.com/simonw/datasette-expose-env/releases", "Homepage": "https://github.com/simonw/datasette-expose-env", "Issues": "https://github.com/simonw/datasette-expose-env/issues"} | https://pypi.org/project/datasette-expose-env/0.1/ | ["datasette", "pytest ; extra == 'test'", "pytest-asyncio ; extra == 'test'"] | >=3.7 | 0.1 | 0 | ||||||
datasette-external-links-new-tabs | Datasette plugin to open external links in new tabs | ["Framework :: Datasette", "License :: OSI Approved :: Apache Software License"] | # datasette-external-links-new-tabs [![PyPI](https://img.shields.io/pypi/v/datasette-external-links-new-tabs.svg)](https://pypi.org/project/datasette-external-links-new-tabs/) [![Changelog](https://img.shields.io/github/v/release/ocdtrekkie/datasette-external-links-new-tabs?include_prereleases&label=changelog)](https://github.com/ocdtrekkie/datasette-external-links-new-tabs/releases) [![Tests](https://github.com/ocdtrekkie/datasette-external-links-new-tabs/workflows/Test/badge.svg)](https://github.com/ocdtrekkie/datasette-external-links-new-tabs/actions?query=workflow%3ATest) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/ocdtrekkie/datasette-external-links-new-tabs/blob/main/LICENSE) Datasette plugin to open external links in new tabs ## Installation Install this plugin in the same environment as Datasette. datasette install datasette-external-links-new-tabs ## Usage There are no usage instructions; it simply opens external links in a new tab. ## Development To set up this plugin locally, first checkout the code. Then create a new virtual environment: cd datasette-external-links-new-tabs python3 -m venv venv source venv/bin/activate Now install the dependencies and test dependencies: pip install -e '.[test]' To run the tests: pytest | Jacob Weisz | text/markdown | https://github.com/ocdtrekkie/datasette-external-links-new-tabs | Apache License, Version 2.0 | https://pypi.org/project/datasette-external-links-new-tabs/ | https://pypi.org/project/datasette-external-links-new-tabs/ | {"CI": "https://github.com/ocdtrekkie/datasette-external-links-new-tabs/actions", "Changelog": "https://github.com/ocdtrekkie/datasette-external-links-new-tabs/releases", "Homepage": "https://github.com/ocdtrekkie/datasette-external-links-new-tabs", "Issues": "https://github.com/ocdtrekkie/datasette-external-links-new-tabs/issues"} | https://pypi.org/project/datasette-external-links-new-tabs/0.1/ | ["datasette", "pytest ; extra == 'test'", "pytest-asyncio ; extra == 'test'"] | >=3.7 | 0.1 | 0 |
datasette-geojson | Add GeoJSON as an output option | [] | # datasette-geojson [](https://pypi.org/project/datasette-geojson/) [](https://github.com/eyeseast/datasette-geojson/releases) [](https://github.com/eyeseast/datasette-geojson/actions?query=workflow%3ATest) [](https://github.com/eyeseast/datasette-geojson/blob/main/LICENSE) Add GeoJSON as an output option for datasette queries. ## Installation Install this plugin in the same environment as Datasette. datasette install datasette-geojson ## Usage To render GeoJSON, add a `.geojson` extension to any query URL that includes a `geometry` column. That column should be a valid [GeoJSON geometry](https://datatracker.ietf.org/doc/html/rfc7946#section-3.1). For example, you might use [geojson-to-sqlite](https://pypi.org/project/geojson-to-sqlite/) or [shapefile-to-sqlite](https://pypi.org/project/shapefile-to-sqlite/) to load [neighborhood boundaries](https://bostonopendata-boston.opendata.arcgis.com/datasets/3525b0ee6e6b427f9aab5d0a1d0a1a28_0/explore) into a SQLite database. ```sh wget -O neighborhoods.geojson https://opendata.arcgis.com/datasets/3525b0ee6e6b427f9aab5d0a1d0a1a28_0.geojson geojson-to-sqlite boston.db neighborhoods neighborhoods.geojson --spatial-index # create a spatial index datasette serve boston.db --load-extension spatialite ``` If you're using Spatialite, the geometry column will be in a binary format. If not, make sure the `geometry` column is a well-formed [GeoJSON geometry](https://datatracker.ietf.org/doc/html/rfc7946#section-3.1). If you used `geojson-to-sqlite` or `shapefile-to-sqlite`, you should be all set. Run this query in Datasette and you'll see a link to download GeoJSON: ```sql select rowid, OBJECTID, Name, Acres, Neighborhood_ID, SqMiles, … | Chris Amico | text/markdown | https://github.com/eyeseast/datasette-geojson | Apache License, Version 2.0 | https://pypi.org/project/datasette-geojson/ | https://pypi.org/project/datasette-geojson/ | {"CI": "https://github.com/eyeseast/datasette-geojson/actions", "Changelog": "https://github.com/eyeseast/datasette-geojson/releases", "Homepage": "https://github.com/eyeseast/datasette-geojson", "Issues": "https://github.com/eyeseast/datasette-geojson/issues"} | https://pypi.org/project/datasette-geojson/0.3.1/ | ["datasette", "geojson", "pytest ; extra == 'test'", "pytest-asyncio ; extra == 'test'", "geojson-to-sqlite ; extra == 'test'"] | >=3.6 | 0.3.1 | 0 | ||||||
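As a rough sketch of consuming that output: any query URL given the `.geojson` extension should return a GeoJSON `FeatureCollection`. The host, database and table below are assumptions for a local instance:

```python
# Fetch the GeoJSON rendering of a table (hypothetical local instance).
import json
from urllib.request import urlopen

url = "http://localhost:8001/boston/neighborhoods.geojson"
with urlopen(url) as response:
    collection = json.load(response)

print(collection["type"])           # expected: "FeatureCollection"
print(len(collection["features"]))  # one feature per returned row
```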
datasette-geojson-map | Render a map for any query with a geometry column | [] | # datasette-geojson-map [](https://pypi.org/project/datasette-geojson-map/) [](https://github.com/eyeseast/datasette-geojson-map/releases) [](https://github.com/eyeseast/datasette-geojson-map/actions?query=workflow%3ATest) [](https://github.com/eyeseast/datasette-geojson-map/blob/main/LICENSE) Render a map for any query with a geometry column ## Installation Install this plugin in the same environment as Datasette. $ datasette install datasette-geojson-map ## Usage Start by loading a GIS file. For example, you might use [geojson-to-sqlite](https://pypi.org/project/geojson-to-sqlite/) or [shapefile-to-sqlite](https://pypi.org/project/shapefile-to-sqlite/) to load [neighborhood boundaries](https://bostonopendata-boston.opendata.arcgis.com/datasets/3525b0ee6e6b427f9aab5d0a1d0a1a28_0/explore) into a SQLite database. ```sh wget -O neighborhoods.geojson https://opendata.arcgis.com/datasets/3525b0ee6e6b427f9aab5d0a1d0a1a28_0.geojson geojson-to-sqlite boston.db neighborhoods neighborhoods.geojson ``` (The command above uses Spatialite, but that's not required.) Start up `datasette` and navigate to the `neighborhoods` table. ```sh datasette serve boston.db # in another terminal tab open http://localhost:8001/boston/neighborhoods ``` You should see a map centered on Boston with each neighborhood outlined. Clicking a boundary will bring up a popup with details on that feature.  This plugin relies on (and will install) [datasette-geojson](https://github.com/eyeseast/datasette-geojson). Any query that includes a `geometry` column will produce a map of the results. This also includes single row views. Run the incl… | Chris Amico | text/markdown | https://github.com/eyeseast/datasette-geojson-map | Apache License, Version 2.0 | https://pypi.org/project/datasette-geojson-map/ | https://pypi.org/project/datasette-geojson-map/ | {"CI": "https://github.com/eyeseast/datasette-geojson-map/actions", "Changelog": "https://github.com/eyeseast/datasette-geojson-map/releases", "Homepage": "https://github.com/eyeseast/datasette-geojson-map", "Issues": "https://github.com/eyeseast/datasette-geojson-map/issues"} | https://pypi.org/project/datasette-geojson-map/0.4.0/ | ["datasette", "datasette-geojson", "datasette-leaflet", "pytest ; extra == 'test'", "pytest-asyncio ; extra == 'test'", "geojson-to-sqlite ; extra == 'test'"] | >=3.6 | 0.4.0 | 0 | ||||||
datasette-glitch | Utilities to help run Datasette on Glitch | [] | # datasette-glitch [](https://pypi.org/project/datasette-glitch/) [](https://github.com/simonw/datasette-glitch/releases) [](https://github.com/simonw/datasette-glitch/blob/master/LICENSE) Utilities to help run Datasette on Glitch ## Installation Install this plugin in the same environment as Datasette. $ pip install datasette-glitch ## Usage This plugin outputs a special link which will sign you into Datasette as the root user. Click Tools -> Logs in the Glitch editor interface after your app starts to see the link. | Simon Willison | text/markdown | https://github.com/simonw/datasette-glitch | Apache License, Version 2.0 | https://pypi.org/project/datasette-glitch/ | https://pypi.org/project/datasette-glitch/ | {"CI": "https://github.com/simonw/datasette-glitch/actions", "Changelog": "https://github.com/simonw/datasette-glitch/releases", "Homepage": "https://github.com/simonw/datasette-glitch", "Issues": "https://github.com/simonw/datasette-glitch/issues"} | https://pypi.org/project/datasette-glitch/0.1/ | ["datasette (>=0.45)", "pytest ; extra == 'test'", "pytest-asyncio ; extra == 'test'", "httpx ; extra == 'test'"] | 0.1 | 0 | |||||||
datasette-graphql | Datasette plugin providing an automatic GraphQL API for your SQLite databases | [] | # datasette-graphql [](https://pypi.org/project/datasette-graphql/) [](https://github.com/simonw/datasette-graphql/releases) [](https://github.com/simonw/datasette-graphql/actions?query=workflow%3ATest) [](https://github.com/simonw/datasette-graphql/blob/main/LICENSE) **Datasette plugin providing an automatic GraphQL API for your SQLite databases** Read more about this project: [GraphQL in Datasette with the new datasette-graphql plugin](https://simonwillison.net/2020/Aug/7/datasette-graphql/) Try out a live demo at [datasette-graphql-demo.datasette.io/graphql](https://datasette-graphql-demo.datasette.io/graphql?query=%7B%0A%20%20repos(first%3A10%2C%20search%3A%20%22sql%22%2C%20sort_desc%3A%20created_at)%20%7B%0A%20%20%20%20totalCount%0A%20%20%20%20pageInfo%20%7B%0A%20%20%20%20%20%20endCursor%0A%20%20%20%20%20%20hasNextPage%0A%20%20%20%20%7D%0A%20%20%20%20nodes%20%7B%0A%20%20%20%20%20%20full_name%0A%20%20%20%20%20%20description_%0A%20%20%20%20%09stargazers_count%0A%20%20%20%20%20%20created_at%0A%20%20%20%20%20%20owner%20%7B%0A%20%20%20%20%20%20%20%20name%0A%20%20%20%20%20%20%20%20html_url%0A%20%20%20%20%20%20%7D%0A%20%20%20%20%7D%0A%20%20%7D%0A%7D%0A) <!-- toc --> - [Installation](#installation) - [Usage](#usage) * [Querying for tables and columns](#querying-for-tables-and-columns) * [Fetching a single record](#fetching-a-single-record) * [Accessing nested objects](#accessing-nested-objects) * [Accessing related objects](#accessing-related-objects) * [Filtering tables](#filtering-tables) * [Sorting](#sorting) * [Pagination](#pagination) * [Search](#search) * [Columns containing JSON strings](#columns-containing-json-strings) * [Auto camelCase](#auto-camelcase) * [CORS](… | Simon Willison | text/markdown | https://github.com/simonw/datasette-graphql | Apache License, Version 2.0 | https://pypi.org/project/datasette-graphql/ | https://pypi.org/project/datasette-graphql/ | {"CI": "https://github.com/simonw/datasette-graphql/actions", "Changelog": "https://github.com/simonw/datasette-graphql/releases", "Homepage": "https://github.com/simonw/datasette-graphql", "Issues": "https://github.com/simonw/datasette-graphql/issues"} | https://pypi.org/project/datasette-graphql/2.1.1/ | ["datasette (>=0.58.1)", "graphene (<4.0,>=3.1.0)", "graphql-core (>=3.2.1)", "sqlite-utils", "pytest ; extra == 'test'", "pytest-asyncio ; extra == 'test'"] | 2.1.1 | 0 | |||||||
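The `/graphql` endpoint behaves like a standard GraphQL server, so it can be queried over HTTP from any client. A minimal sketch against the demo instance linked above, using httpx (query shape taken from the demo link; the field selection is illustrative):

```python
# POST a GraphQL query to the plugin's endpoint and read the JSON response.
import httpx

query = """
{
  repos(first: 3, search: "sql") {
    totalCount
    nodes { full_name }
  }
}
"""
response = httpx.post(
    "https://datasette-graphql-demo.datasette.io/graphql",
    json={"query": query},
)
data = response.json()["data"]
print(data["repos"]["totalCount"])
print([repo["full_name"] for repo in data["repos"]["nodes"]])
```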
datasette-gunicorn | Run a Datasette server using Gunicorn | ["Framework :: Datasette", "License :: OSI Approved :: Apache Software License"] | # datasette-gunicorn [![PyPI](https://img.shields.io/pypi/v/datasette-gunicorn.svg)](https://pypi.org/project/datasette-gunicorn/) [![Changelog](https://img.shields.io/github/v/release/simonw/datasette-gunicorn?include_prereleases&label=changelog)](https://github.com/simonw/datasette-gunicorn/releases) [![Tests](https://github.com/simonw/datasette-gunicorn/workflows/Test/badge.svg)](https://github.com/simonw/datasette-gunicorn/actions?query=workflow%3ATest) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/datasette-gunicorn/blob/main/LICENSE) Run a [Datasette](https://datasette.io/) server using [Gunicorn](https://gunicorn.org/) ## Installation Install this plugin in the same environment as Datasette. datasette install datasette-gunicorn ## Usage The plugin adds a new `datasette gunicorn` command. This takes most of the same options as `datasette serve`, plus one more option for setting the number of Gunicorn workers to start: `-w/--workers X` - set the number of workers. Defaults to 1. To start serving a database using 4 workers, run the following: datasette gunicorn fixtures.db -w 4 It is advisable to switch your database [into WAL mode](https://til.simonwillison.net/sqlite/enabling-wal-mode) to get the best performance out of this configuration: sqlite3 fixtures.db 'PRAGMA journal_mode=WAL;' Run `datasette gunicorn --help` for a full list of options (most of the `datasette serve --help` options, plus the new `-w` option). ## datasette gunicorn --help Not all of the options to `datasette serve` are supported. Here's the full list of available options: <!-- [[[cog import cog from datasette import cli from click.testing import CliRunner runner = CliRunner() result = runner.invoke(cli.cli, ["gunicorn", "--help"]) help = result.output.replace("Usage: cli", "Usage: datasette") cog.out( "```\n{}\n```".format(help) ) ]]] --> ``` Usage: datasette gunicorn [OPTIONS] [FILES]... Start a Gunicorn server running to serve … | Simon Willison | text/markdown | https://github.com/simonw/datasette-gunicorn | Apache License, Version 2.0 | https://pypi.org/project/datasette-gunicorn/ | https://pypi.org/project/datasette-gunicorn/ | {"CI": "https://github.com/simonw/datasette-gunicorn/actions", "Changelog": "https://github.com/simonw/datasette-gunicorn/releases", "Homepage": "https://github.com/simonw/datasette-gunicorn", "Issues": "https://github.com/simonw/datasette-gunicorn/issues"} | https://pypi.org/project/datasette-gunicorn/0.1/ | ["datasette", "gunicorn", "pytest ; extra == 'test'", "pytest-asyncio ; extra == 'test'", "cogapp ; extra == 'test'"] | >=3.7 | 0.1 | 0 |
datasette-gzip | Add gzip compression to Datasette | ["Framework :: Datasette", "License :: OSI Approved :: Apache Software License"] | # datasette-gzip [](https://pypi.org/project/datasette-gzip/) [](https://github.com/simonw/datasette-gzip/releases) [](https://github.com/simonw/datasette-gzip/actions?query=workflow%3ATest) [](https://github.com/simonw/datasette-gzip/blob/main/LICENSE) Add gzip compression to Datasette ## Installation Install this plugin in the same environment as Datasette. datasette install datasette-gzip ## Usage Once installed, Datasette will obey the `Accept-Encoding:` header sent by browsers or other user agents and return content compressed in the most appropriate way. This plugin is a thin wrapper for the [asgi-gzip library](https://github.com/simonw/asgi-gzip), which extracts the [GzipMiddleware](https://www.starlette.io/middleware/#gzipmiddleware) from Starlette. ## Development To set up this plugin locally, first checkout the code. Then create a new virtual environment: cd datasette-gzip python3 -mvenv venv source venv/bin/activate Now install the dependencies and test dependencies: pip install -e '.[test]' To run the tests: pytest | Simon Willison | text/markdown | https://github.com/simonw/datasette-gzip | Apache License, Version 2.0 | https://pypi.org/project/datasette-gzip/ | https://pypi.org/project/datasette-gzip/ | {"CI": "https://github.com/simonw/datasette-gzip/actions", "Changelog": "https://github.com/simonw/datasette-gzip/releases", "Homepage": "https://github.com/simonw/datasette-gzip", "Issues": "https://github.com/simonw/datasette-gzip/issues"} | https://pypi.org/project/datasette-gzip/0.2/ | ["datasette", "asgi-gzip", "pytest ; extra == 'test'", "pytest-asyncio ; extra == 'test'"] | >=3.7 | 0.2 | 0 | ||||||
datasette-hashed-urls | Optimize Datasette performance behind a caching proxy | ["Framework :: Datasette", "License :: OSI Approved :: Apache Software License"] | # datasette-hashed-urls [![PyPI](https://img.shields.io/pypi/v/datasette-hashed-urls.svg)](https://pypi.org/project/datasette-hashed-urls/) [![Changelog](https://img.shields.io/github/v/release/simonw/datasette-hashed-urls?include_prereleases&label=changelog)](https://github.com/simonw/datasette-hashed-urls/releases) [![Tests](https://github.com/simonw/datasette-hashed-urls/workflows/Test/badge.svg)](https://github.com/simonw/datasette-hashed-urls/actions?query=workflow%3ATest) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/datasette-hashed-urls/blob/main/LICENSE) Optimize Datasette performance behind a caching proxy When you open a database file in immutable mode using the `-i` option, Datasette calculates a SHA-256 hash of the contents of that file on startup. This content hash can then optionally be used to create URLs that are guaranteed to change if the contents of the file changes in the future. The result is pages that can be cached indefinitely by both browsers and caching proxies - providing a significant performance boost. ## Demo A demo of this plugin is running at https://datasette-hashed-urls.vercel.app/ ## Installation Install this plugin in the same environment as Datasette. $ datasette install datasette-hashed-urls ## Usage Once installed, this plugin will act on any immutable database files that are loaded into Datasette: datasette -i fixtures.db The database will automatically be renamed to incorporate a hash of the contents of the SQLite file - so the above database would be served as: http://127.0.0.1:8001/fixtures-aa7318b Every page that accesses that database, including JSON endpoints, will be served with the following far-future cache expiry header: cache-control: max-age=31536000, public Here `max-age=31536000` is the number of seconds in a year. A caching proxy such as Cloudflare can then be used to cache and accelerate content served by Datasette. When the database file is updated and the server is re… | Simon Willison | text/markdown | https://github.com/simonw/datasette-hashed-urls | Apache License, Version 2.0 | https://pypi.org/project/datasette-hashed-urls/ | https://pypi.org/project/datasette-hashed-urls/ | {"CI": "https://github.com/simonw/datasette-hashed-urls/actions", "Changelog": "https://github.com/simonw/datasette-hashed-urls/releases", "Homepage": "https://github.com/simonw/datasette-hashed-urls", "Issues": "https://github.com/simonw/datasette-hashed-urls/issues"} | https://pypi.org/project/datasette-hashed-urls/0.4/ | ["datasette (>=0.61.1)", "pytest ; extra == 'test'", "pytest-asyncio ; extra == 'test'", "sqlite-utils ; extra == 'test'"] | >=3.7 | 0.4 | 0 |
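The content-hash idea is easy to sketch in Python. The 7-character truncation below matches the `fixtures-aa7318b` demo URL, but the exact suffix length the plugin uses is an assumption here:

```python
# Sketch: derive a content-addressed database name like "fixtures-aa7318b".
import hashlib

def hashed_name(db_name, path, length=7):
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return "{}-{}".format(db_name, digest[:length])

# Any change to fixtures.db changes the hash, and therefore the URL -
# which is what makes far-future cache headers safe.
print(hashed_name("fixtures", "fixtures.db"))
```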
datasette-haversine | Datasette plugin that adds a custom SQL function for haversine distances | [] | # datasette-haversine [](https://pypi.org/project/datasette-haversine/) [](https://github.com/simonw/datasette-haversine/releases) [](https://github.com/simonw/datasette-haversine/actions?query=workflow%3ATest) [](https://github.com/simonw/datasette-haversine/blob/main/LICENSE) Datasette plugin that adds a custom SQL function for haversine distances Install this plugin in the same environment as Datasette to enable the `haversine()` SQL function. $ pip install datasette-haversine The plugin is built on top of the [haversine](https://github.com/mapado/haversine) library. ## haversine() to calculate distances ```sql select haversine(lat1, lon1, lat2, lon2); ``` This will return the distance in kilometers between the point defined by `(lat1, lon1)` and the point defined by `(lat2, lon2)`. ## Custom units By default `haversine()` returns results in km. You can pass an optional third argument to get results in a different unit: - `ft` for feet - `m` for meters - `in` for inches - `mi` for miles - `nmi` for nautical miles - `km` for kilometers (the default) ```sql select haversine(lat1, lon1, lat2, lon2, 'mi'); ``` | Simon Willison | text/markdown | https://github.com/simonw/datasette-haversine | Apache License, Version 2.0 | https://pypi.org/project/datasette-haversine/ | https://pypi.org/project/datasette-haversine/ | {"Homepage": "https://github.com/simonw/datasette-haversine"} | https://pypi.org/project/datasette-haversine/0.2/ | ["datasette", "haversine", "pytest ; extra == 'test'"] | 0.2 | 0 | |||||||
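For reference, the great-circle formula behind `haversine()` looks like this in plain Python. This is a sketch using the mean Earth radius; the underlying haversine library may differ in small details:

```python
# Plain-Python haversine distance in kilometers.
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (
        sin((lat2 - lat1) / 2) ** 2
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    )
    return 2 * 6371 * asin(sqrt(a))  # 6371 km = mean Earth radius

print(haversine_km(51.5074, -0.1278, 48.8566, 2.3522))  # London-Paris, ~343 km
```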
datasette-hovercards | Add preview hovercards to links in Datasette | ["Framework :: Datasette", "License :: OSI Approved :: Apache Software License"] | # datasette-hovercards [](https://pypi.org/project/datasette-hovercards/) [](https://github.com/simonw/datasette-hovercards/releases) [](https://github.com/simonw/datasette-hovercards/actions?query=workflow%3ATest) [](https://github.com/simonw/datasette-hovercards/blob/main/LICENSE) Add preview hovercards to links in Datasette ## Installation Install this plugin in the same environment as Datasette. $ datasette install datasette-hovercards ## Usage Once installed, hovering over a link to a row within the Datasette interface - for example a foreign key reference on the table page - should show a hovercard with a preview of that row. ## Development To set up this plugin locally, first checkout the code. Then create a new virtual environment: cd datasette-hovercards python3 -mvenv venv source venv/bin/activate Or if you are using `pipenv`: pipenv shell Now install the dependencies and test dependencies: pip install -e '.[test]' To run the tests: pytest | Simon Willison | text/markdown | https://github.com/simonw/datasette-hovercards | Apache License, Version 2.0 | https://pypi.org/project/datasette-hovercards/ | https://pypi.org/project/datasette-hovercards/ | {"CI": "https://github.com/simonw/datasette-hovercards/actions", "Changelog": "https://github.com/simonw/datasette-hovercards/releases", "Homepage": "https://github.com/simonw/datasette-hovercards", "Issues": "https://github.com/simonw/datasette-hovercards/issues"} | https://pypi.org/project/datasette-hovercards/0.1a0/ | ["datasette", "pytest ; extra == 'test'", "pytest-asyncio ; extra == 'test'"] | >=3.6 | 0.1a0 | 0 | ||||||
datasette-ics | Datasette plugin for outputting iCalendar files | ["Framework :: Datasette", "License :: OSI Approved :: Apache Software License"] | # datasette-ics [![PyPI](https://img.shields.io/pypi/v/datasette-ics.svg)](https://pypi.org/project/datasette-ics/) [![Changelog](https://img.shields.io/github/v/release/simonw/datasette-ics?include_prereleases&label=changelog)](https://github.com/simonw/datasette-ics/releases) [![Tests](https://github.com/simonw/datasette-ics/workflows/Test/badge.svg)](https://github.com/simonw/datasette-ics/actions?query=workflow%3ATest) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/datasette-ics/blob/main/LICENSE) Datasette plugin that adds support for generating [iCalendar .ics files](https://tools.ietf.org/html/rfc5545) with the results of a SQL query. ## Installation Install this plugin in the same environment as Datasette to enable the `.ics` output extension. $ pip install datasette-ics ## Usage To create an iCalendar file you need to define a custom SQL query that returns a required set of columns: * `event_name` - the short name for the event * `event_dtstart` - when the event starts The following columns are optional: * `event_dtend` - when the event ends * `event_duration` - the duration of the event (use instead of `dtend`) * `event_description` - a longer description of the event * `event_uid` - a globally unique identifier for this event * `event_tzid` - the timezone for the event, e.g. `America/Chicago` A query that returns these columns can then be returned as an ics feed by adding the `.ics` extension. ## Demo [This SQL query](https://www.rockybeaches.com/data?sql=with+inner+as+(%0D%0A++select%0D%0A++++datetime%2C%0D%0A++++substr(datetime%2C+0%2C+11)+as+date%2C%0D%0A++++mllw_feet%2C%0D%0A++++lag(mllw_feet)+over+win+as+previous_mllw_feet%2C%0D%0A++++lead(mllw_feet)+over+win+as+next_mllw_feet%0D%0A++from%0D%0A++++tide_predictions%0D%0A++where%0D%0A++++station_id+%3D+%3Astation_id%0D%0A++++and+datetime+%3E%3D+date()%0D%0A++++window+win+as+(%0D%0A++++++order+by%0D%0A++++++++datetime%0D%0A++++)%0D%0A++order+by%0D%0A++++datetime%0D%0A)%2C%0D%0Alowe… | Simon Willison | text/markdown | https://github.com/simonw/datasette-ics | Apache License, Version 2.0 | https://pypi.org/project/datasette-ics/ | https://pypi.org/project/datasette-ics/ | {"CI": "https://github.com/simonw/datasette-ics/actions", "Changelog": "https://github.com/simonw/datasette-ics/releases", "Homepage": "https://github.com/simonw/datasette-ics", "Issues": "https://github.com/simonw/datasette-ics/issues"} | https://pypi.org/project/datasette-ics/0.5.2/ | ["datasette (>=0.49)", "ics (==0.7.2)", "pytest ; extra == 'test'", "pytest-asyncio ; extra == 'test'"] | 0.5.2 | 0 |
datasette-import-table | Datasette plugin for importing tables from other Datasette instances | [] | # datasette-import-table [](https://pypi.org/project/datasette-import-table/) [](https://github.com/simonw/datasette-import-table/releases) [](https://github.com/simonw/datasette-import-table/actions?query=workflow%3ATest) [](https://github.com/simonw/datasette-import-table/blob/main/LICENSE) Datasette plugin for importing tables from other Datasette instances ## Installation Install this plugin in the same environment as Datasette. $ datasette install datasette-import-table ## Usage Visit `/-/import-table` for the interface. Paste in the URL to a table page on another Datasette instance and click the button to import that table. By default only [the root actor](https://datasette.readthedocs.io/en/stable/authentication.html#using-the-root-actor) can access the page - so you'll need to run Datasette with the `--root` option and click on the link shown in the terminal to sign in and access the page. The `import-table` permission governs access. You can use permission plugins such as [datasette-permissions-sql](https://github.com/simonw/datasette-permissions-sql) to grant additional access to the write interface. ## Development To set up this plugin locally, first checkout the code. Then create a new virtual environment: cd datasette-import-table python3 -mvenv venv source venv/bin/activate Or if you are using `pipenv`: pipenv shell Now install the dependencies and tests: pip install -e '.[test]' To run the tests: pytest | Simon Willison | text/markdown | https://github.com/simonw/datasette-import-table | Apache License, Version 2.0 | https://pypi.org/project/datasette-import-table/ | https://pypi.org/project/datasette-import-table/ | {"CI": "https://github.com/simonw/datasette-import-table/actions", "Changelog": "https://github.com/simonw/datasette-import-table/releases", "Homepage": "https://github.com/simonw/datasette-import-table", "Issues": "https://github.com/simonw/datasette-import-table/issues"} | https://pypi.org/project/datasette-import-table/0.3/ | ["datasette", "httpx", "sqlite-utils", "pytest ; extra == 'test'", "pytest-asyncio ; extra == 'test'", "httpx ; extra == 'test'", "pytest-httpx ; extra == 'test'"] | 0.3 | 0 | |||||||
datasette-indieauth | Datasette authentication using IndieAuth and RelMeAuth | [] | # datasette-indieauth [![PyPI](https://img.shields.io/pypi/v/datasette-indieauth.svg)](https://pypi.org/project/datasette-indieauth/) [![Changelog](https://img.shields.io/github/v/release/simonw/datasette-indieauth?include_prereleases&label=changelog)](https://github.com/simonw/datasette-indieauth/releases) [![codecov](https://codecov.io/gh/simonw/datasette-indieauth/branch/main/graph/badge.svg)](https://codecov.io/gh/simonw/datasette-indieauth) [![Tests](https://github.com/simonw/datasette-indieauth/workflows/Test/badge.svg)](https://github.com/simonw/datasette-indieauth/actions?query=workflow%3ATest) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/datasette-indieauth/blob/main/LICENSE) Datasette authentication using [IndieAuth](https://indieauth.net/). ## Demo You can try out the latest version of this plugin at [datasette-indieauth-demo.datasette.io](https://datasette-indieauth-demo.datasette.io/-/indieauth) ## Installation Install this plugin in the same environment as Datasette. $ datasette install datasette-indieauth ## Usage Ensure you have a website with a domain that supports IndieAuth or RelMeAuth. The easiest way to do that is to add the following HTML to your homepage, linking to your personal GitHub profile: ```html <link href="https://github.com/simonw" rel="me"> <link rel="authorization_endpoint" href="https://indieauth.com/auth"> ``` Your GitHub profile needs to link back to your website, to prove that your GitHub account should be a valid identifier for that page. Now visit `/-/indieauth` on your Datasette instance to begin the sign-in process. ## Actor When a user signs in using IndieAuth they will receive a signed `ds_actor` cookie identifying them as an [actor](https://docs.datasette.io/en/stable/authentication.html#actors) that looks like this: ```json { "me": "https://simonwillison.net/", "display": "simonwillison.net" } ``` If the IndieAuth server returned additional `"profile"` fields those will be merged int… | Simon Willison | text/markdown | https://github.com/simonw/datasette-indieauth | Apache License, Version 2.0 | https://pypi.org/project/datasette-indieauth/ | https://pypi.org/project/datasette-indieauth/ | {"CI": "https://github.com/simonw/datasette-indieauth/actions", "Changelog": "https://github.com/simonw/datasette-indieauth/releases", "Homepage": "https://github.com/simonw/datasette-indieauth", "Issues": "https://github.com/simonw/datasette-indieauth/issues"} | https://pypi.org/project/datasette-indieauth/1.2.1/ | ["datasette", "pytest ; extra == 'test'", "pytest-asyncio ; extra == 'test'", "httpx ; extra == 'test'", "pytest-httpx ; extra == 'test'", "mf2py ; extra == 'test'"] | >=3.7 | 1.2.1 | 0 |
datasette-init | Ensure specific tables and views exist on startup | [] | # datasette-init [![PyPI](https://img.shields.io/pypi/v/datasette-init.svg)](https://pypi.org/project/datasette-init/) [![Changelog](https://img.shields.io/github/v/release/simonw/datasette-init?include_prereleases&label=changelog)](https://github.com/simonw/datasette-init/releases) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/datasette-init/blob/master/LICENSE) Ensure specific tables and views exist on startup ## Installation Install this plugin in the same environment as Datasette. $ pip install datasette-init ## Usage This plugin is configured using `metadata.json` (or `metadata.yaml`). ### Creating tables Add a block like this that specifies the tables you would like to ensure exist: ```json { "plugins": { "datasette-init": { "my_database": { "tables": { "dogs": { "columns": { "id": "integer", "name": "text", "age": "integer", "weight": "float" }, "pk": "id" } } } } } } ``` Any tables that do not yet exist will be created when Datasette first starts. Valid column types are `"integer"`, `"text"`, `"float"` and `"blob"`. The `"pk"` is optional, and is used to define the primary key. To define a compound primary key (across more than one column) use a list of column names here: ```json "pk": ["id1", "id2"] ``` ### Creating views The plugin can also be used to create views: ```json { "plugins": { "datasette-init": { "my_database": { "views": { "my_view": "select 1 + 1" } } } } } ``` Each view in the `"views"` block will be created when Datasette first starts. If a view with the same name already exists it will be replaced with the new definition. ## Development To set up this plugin locally, first checkout the code. Then create a new virtual environment: cd datasette-init python3 -mvenv venv source venv/bin/activate Or if you are usin… | Simon Willison | text/markdown | https://github.com/simonw/datasette-init | Apache License, Version 2.0 | https://pypi.org/project/datasette-init/ | https://pypi.org/project/datasette-init/ | {"CI": "https://github.com/simonw/datasette-init/actions", "Changelog": "https://github.com/simonw/datasette-init/releases", "Homepage": "https://github.com/simonw/datasette-init", "Issues": "https://github.com/simonw/datasette-init/issues"} | https://pypi.org/project/datasette-init/0.2/ | ["datasette (>=0.45)", "sqlite-utils", "pytest ; extra == 'test'", "pytest-asyncio ; extra == 'test'", "httpx ; extra == 'test'"] | 0.2 | 0 |
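The `"tables"` block corresponds closely to the sqlite-utils API the plugin depends on. A sketch of the equivalent Python (not the plugin's actual internals):

```python
# Equivalent of the "dogs" table definition above, via sqlite-utils.
import sqlite_utils

db = sqlite_utils.Database("my_database.db")
if not db["dogs"].exists():
    db["dogs"].create(
        {"id": int, "name": str, "age": int, "weight": float},
        pk="id",
    )
```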
datasette-insert | Datasette plugin for inserting and updating data | [] | # datasette-insert [![PyPI](https://img.shields.io/pypi/v/datasette-insert.svg)](https://pypi.org/project/datasette-insert/) [![Changelog](https://img.shields.io/github/v/release/simonw/datasette-insert?include_prereleases&label=changelog)](https://github.com/simonw/datasette-insert/releases) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/datasette-insert/blob/master/LICENSE) Datasette plugin for inserting and updating data ## Installation Install this plugin in the same environment as Datasette. $ pip install datasette-insert This plugin should always be deployed with additional configuration to prevent unauthenticated access; see notes below. If you are trying it out on your own local machine, you can `pip install` the [datasette-insert-unsafe](https://github.com/simonw/datasette-insert-unsafe) plugin to allow access without needing to set up authentication or permissions separately. ## Inserting data and creating tables Start Datasette and make sure it has a writable SQLite database attached to it. If you have not yet created a database file you can use this: datasette data.db --create The `--create` option will create a new empty `data.db` database file if it does not already exist. The plugin adds an endpoint that allows data to be inserted or updated and tables to be created by POSTing JSON data to the following URL: /-/insert/name-of-database/name-of-table The JSON should look like this: ```json [ { "id": 1, "name": "Cleopaws", "age": 5 }, { "id": 2, "name": "Pancakes", "age": 5 } ] ``` The first time data is posted to the URL a table of that name will be created if it does not already exist, with the desired columns. You can specify which column should be used as the primary key using the `?pk=` URL argument. Here's how to POST to a database and create a new table using the Python `requests` library: ```python import requests requests.post("http://localhost:800… | Simon Willison | text/markdown | https://datasette.io/plugins/datasette-insert | Apache License, Version 2.0 | https://pypi.org/project/datasette-insert/ | https://pypi.org/project/datasette-insert/ | {"CI": "https://github.com/simonw/datasette-insert/actions", "Changelog": "https://github.com/simonw/datasette-insert/releases", "Homepage": "https://datasette.io/plugins/datasette-insert", "Issues": "https://github.com/simonw/datasette-insert/issues"} | https://pypi.org/project/datasette-insert/0.8/ | ["datasette (>=0.46)", "sqlite-utils", "pytest ; extra == 'test'", "pytest-asyncio ; extra == 'test'", "httpx ; extra == 'test'", "datasette-auth-tokens ; extra == 'test'"] | >=3.7 | 0.8 | 0 |
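Completing the pattern from the truncated example above, a minimal sketch of the POST (the port and the database/table names are assumptions, and authentication is omitted, so this only works with something like `datasette-insert-unsafe` enabled):

```python
# POST JSON rows to create/populate a "dogs" table in data.db.
import requests

rows = [
    {"id": 1, "name": "Cleopaws", "age": 5},
    {"id": 2, "name": "Pancakes", "age": 5},
]
response = requests.post(
    "http://localhost:8001/-/insert/data/dogs?pk=id",
    json=rows,
)
print(response.status_code, response.json())
```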
datasette-jellyfish | Datasette plugin adding SQL functions for fuzzy text matching powered by Jellyfish | [] | # datasette-jellyfish [](https://pypi.org/project/datasette-jellyfish/) [](https://github.com/simonw/datasette-jellyfish/releases) [](https://github.com/simonw/datasette-jellyfish/actions?query=workflow%3ATest) [](https://github.com/simonw/datasette-jellyfish/blob/main/LICENSE) Datasette plugin that adds custom SQL functions for fuzzy string matching, built on top of the [Jellyfish](https://github.com/jamesturk/jellyfish) Python library by James Turk and Michael Stephens. Interactive demos: * [soundex, metaphone, nysiis, match_rating_codex comparison](https://latest-with-plugins.datasette.io/fixtures?sql=SELECT%0D%0A++++soundex%28%3As%29%2C+%0D%0A++++metaphone%28%3As%29%2C+%0D%0A++++nysiis%28%3As%29%2C+%0D%0A++++match_rating_codex%28%3As%29&s=demo). * [distance functions comparison](https://latest-with-plugins.datasette.io/fixtures?sql=SELECT%0D%0A++++levenshtein_distance%28%3As1%2C+%3As2%29%2C%0D%0A++++damerau_levenshtein_distance%28%3As1%2C+%3As2%29%2C%0D%0A++++hamming_distance%28%3As1%2C+%3As2%29%2C%0D%0A++++jaro_similarity%28%3As1%2C+%3As2%29%2C%0D%0A++++jaro_winkler_similarity%28%3As1%2C+%3As2%29%2C%0D%0A++++match_rating_comparison%28%3As1%2C+%3As2%29%3B&s1=barrack+obama&s2=barrack+h+obama) Examples: SELECT soundex("hello"); -- Outputs H400 SELECT metaphone("hello"); -- Outputs HL SELECT nysiis("hello"); -- Outputs HAL SELECT match_rating_codex("hello"); -- Outputs HLL SELECT porter_stem("running"); -- Outputs run SELECT levenshtein_distance("hello", "hello world"); -- Outputs 6 SELECT damerau_levenshtein_distance("hello", "hello world"); -- Outputs 6 SELECT hamming_distance("hello", "hello wor… | Simon Willison | text/markdown | https://datasette.io/plugins/datasette-jellyfish | Apache License, Version 2.0 | https://pypi.org/project/datasette-jellyfish/ | https://pypi.org/project/datasette-jellyfish/ | {"CI": "https://github.com/simonw/datasette-jellyfish/actions", "Changelog": "https://github.com/simonw/datasette-jellyfish/releases", "Homepage": "https://datasette.io/plugins/datasette-jellyfish", "Issues": "https://github.com/simonw/datasette-jellyfish/issues"} | https://pypi.org/project/datasette-jellyfish/1.0.1/ | ["datasette", "jellyfish (>=0.8.2)", "pytest ; extra == 'test'"] | 1.0.1 | 0 | |||||||
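The same functions can be sanity-checked outside Datasette by calling the underlying jellyfish library directly:

```python
# Checking expected outputs against the jellyfish library itself.
import jellyfish

print(jellyfish.soundex("hello"))    # H400
print(jellyfish.metaphone("hello"))  # HL
print(jellyfish.levenshtein_distance("hello", "hello world"))  # 6
print(jellyfish.jaro_winkler_similarity("barrack obama", "barrack h obama"))
```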
datasette-jq | Datasette plugin that adds custom SQL functions for executing jq expressions against JSON values | [] | # datasette-jq [](https://pypi.org/project/datasette-jq/) [](https://circleci.com/gh/simonw/datasette-jq) [](https://github.com/simonw/datasette-jq/blob/master/LICENSE) Datasette plugin that adds custom SQL functions for executing [jq](https://stedolan.github.io/jq/) expressions against JSON values. Install this plugin in the same environment as Datasette to enable the `jq()` SQL function. Usage: select jq( column_with_json, "{top_3: .classifiers[:3], v: .version}" ) See [the jq manual](https://stedolan.github.io/jq/manual/#Basicfilters) for full details of supported expression syntax. ## Interactive demo You can try this plugin out at [datasette-jq-demo.datasette.io](https://datasette-jq-demo.datasette.io/) Sample query: select package, "https://pypi.org/project/" || package || "/" as url, jq(info, "{summary: .info.summary, author: .info.author, versions: .releases|keys|reverse}") from packages [Try this query out](https://datasette-jq-demo.datasette.io/demo?sql=select+package%2C+%22https%3A%2F%2Fpypi.org%2Fproject%2F%22+%7C%7C+package+%7C%7C+%22%2F%22+as+url%2C%0D%0Ajq%28info%2C+%22%7Bsummary%3A+.info.summary%2C+author%3A+.info.author%2C+versions%3A+.releases%7Ckeys%7Creverse%7D%22%29%0D%0Afrom+packages) in the interactive demo. | Simon Willison | text/markdown | https://github.com/simonw/datasette-jq | Apache License, Version 2.0 | https://pypi.org/project/datasette-jq/ | https://pypi.org/project/datasette-jq/ | {"Homepage": "https://github.com/simonw/datasette-jq"} | https://pypi.org/project/datasette-jq/0.2.1/ | ["datasette", "pyjq", "six", "pytest ; extra == 'test'"] | 0.2.1 | 0 | |||||||
datasette-json-html | Datasette plugin for rendering HTML based on JSON values | [] | # datasette-json-html [![PyPI](https://img.shields.io/pypi/v/datasette-json-html.svg)](https://pypi.org/project/datasette-json-html/) [![Changelog](https://img.shields.io/github/v/release/simonw/datasette-json-html?include_prereleases&label=changelog)](https://github.com/simonw/datasette-json-html/releases) [![Tests](https://github.com/simonw/datasette-json-html/workflows/Test/badge.svg)](https://github.com/simonw/datasette-json-html/actions?query=workflow%3ATest) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/datasette-json-html/blob/main/LICENSE) Datasette plugin for rendering HTML based on JSON values, using the [render_cell plugin hook](https://docs.datasette.io/en/stable/plugin_hooks.html#render-cell-value-column-table-database-datasette). This plugin looks for cell values that match a very specific JSON format and converts them into HTML when they are rendered by the Datasette interface. ## Links { "href": "https://simonwillison.net/", "label": "Simon Willison" } Will be rendered as an `<a href="">` link: <a href="https://simonwillison.net/">Simon Willison</a> You can set a tooltip on the link using a `"title"` key: { "href": "https://simonwillison.net/", "label": "Simon Willison", "title": "My blog" } Produces: <a href="https://simonwillison.net/" title="My blog">Simon Willison</a> You can also include a description, which will be displayed below the link. If descriptions include newlines they will be converted to `<br>` elements: select json_object( "href", "https://simonwillison.net/", "label", "Simon Willison", "description", "This can contain" || x'0a' || "newlines" ) Produces: <strong><a href="https://simonwillison.net/">Simon Willison</a></strong><br>This can contain<br>newlines * [Literal JSON link demo](https://datasette-json-html.datasette.io/demo?sql=select+%27%7B%0D%0A++++%22href%22%3A+%22https%3A%2F%2Fsimonwillison.net%2F%… | Simon Willison | text/markdown | https://datasette.io/plugins/datasette-json-html | Apache License, Version 2.0 | https://pypi.org/project/datasette-json-html/ | https://pypi.org/project/datasette-json-html/ | {"CI": "https://github.com/simonw/datasette-json-html/actions", "Changelog": "https://github.com/simonw/datasette-json-html/releases", "Homepage": "https://datasette.io/plugins/datasette-json-html", "Issues": "https://github.com/simonw/datasette-json-html/issues"} | https://pypi.org/project/datasette-json-html/1.0.1/ | ["datasette", "pytest ; extra == 'test'", "pytest-asyncio ; extra == 'test'", "httpx ; extra == 'test'"] | 1.0.1 | 0 |
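The `json_object()` pattern from the README can be checked with Python's built-in sqlite3 module, assuming a SQLite build with the JSON1 functions (standard in recent versions):

```python
# Build the link JSON that datasette-json-html renders as an <a> element.
import sqlite3

conn = sqlite3.connect(":memory:")
(link_json,) = conn.execute(
    """
    select json_object(
        'href', 'https://simonwillison.net/',
        'label', 'Simon Willison',
        'title', 'My blog'
    )
    """
).fetchone()
print(link_json)
# {"href":"https://simonwillison.net/","label":"Simon Willison","title":"My blog"}
```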
datasette-jupyterlite | JupyterLite as a Datasette plugin | [] | # datasette-jupyterlite [](https://pypi.org/project/datasette-jupyterlite/) [](https://github.com/simonw/datasette-jupyterlite/releases) [](https://github.com/simonw/datasette-jupyterlite/actions?query=workflow%3ATest) [](https://github.com/simonw/datasette-jupyterlite/blob/main/LICENSE) [JupyterLite](https://jupyterlite.readthedocs.io/en/latest/) as a Datasette plugin ## Installation Install this plugin in the same environment as Datasette. $ datasette install datasette-jupyterlite ## Demo You can try out a demo of the plugin here: https://latest-with-plugins.datasette.io/jupyterlite/ Run this example code in a Pyolite notebook to pull all of the data from the [github/stars](https://latest-with-plugins.datasette.io/github/stars) table into a Pandas DataFrame: ```python import pandas, pyodide df = pandas.read_csv(pyodide.open_url( "https://latest-with-plugins.datasette.io/github/stars.csv?_labels=on&_stream=on&_size=max") ) ``` ## Usage Once installed, visit `/jupyterlite/` to access JupyterLite served from your Datasette instance. ## Development To set up this plugin locally, first checkout the code. Then create a new virtual environment: cd datasette-jupyterlite python3 -mvenv venv source venv/bin/activate Or if you are using `pipenv`: pipenv shell Now install the dependencies and test dependencies: pip install -e '.[test]' To run the tests: pytest | Simon Willison | text/markdown | https://github.com/simonw/datasette-jupyterlite | Apache License, Version 2.0 | https://pypi.org/project/datasette-jupyterlite/ | https://pypi.org/project/datasette-jupyterlite/ | {"CI": "https://github.com/simonw/datasette-jupyterlite/actions", "Changelog": "https://github.com/simonw/datasette-jupyterlite/releases", "Homepage": "https://github.com/simonw/datasette-jupyterlite", "Issues": "https://github.com/simonw/datasette-jupyterlite/issues"} | https://pypi.org/project/datasette-jupyterlite/0.1a1/ | ["datasette", "jupyterlite", "importlib-resources", "pytest ; extra == 'test'", "pytest-asyncio ; extra == 'test'"] | >=3.7 | 0.1a1 | 0 | ||||||
datasette-leaflet | A plugin that bundles Leaflet.js for Datasette | [] | # datasette-leaflet [![PyPI](https://img.shields.io/pypi/v/datasette-leaflet.svg)](https://pypi.org/project/datasette-leaflet/) [![Changelog](https://img.shields.io/github/v/release/simonw/datasette-leaflet?include_prereleases&label=changelog)](https://github.com/simonw/datasette-leaflet/releases) [![Tests](https://github.com/simonw/datasette-leaflet/workflows/Test/badge.svg)](https://github.com/simonw/datasette-leaflet/actions?query=workflow%3ATest) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/datasette-leaflet/blob/main/LICENSE) Datasette plugin adding the [Leaflet](https://leafletjs.com/) JavaScript library. A growing number of Datasette plugins depend on the Leaflet JavaScript mapping library. They each have their own way of loading Leaflet, which could result in loading it multiple times (with multiple versions) if more than one plugin is installed. This library is intended to solve this problem, by providing a single plugin they can all depend on that loads Leaflet in a reusable way. Plugins that use this: - [datasette-leaflet-freedraw](https://datasette.io/plugins/datasette-leaflet-freedraw) - [datasette-leaflet-geojson](https://datasette.io/plugins/datasette-leaflet-geojson) - [datasette-cluster-map](https://datasette.io/plugins/datasette-cluster-map) ## Installation You can install this plugin like so: datasette install datasette-leaflet Usually this plugin will be a dependency of other plugins, so it should be installed automatically when you install them. ## Usage The plugin makes `leaflet.js` and `leaflet.css` available as static files. It provides two custom template variables with the URLs of those two files. - `{{ datasette_leaflet_url }}` is the URL to the JavaScript - `{{ datasette_leaflet_css_url }}` is the URL to the CSS These URLs are also made available as global JavaScript constants: - `datasette.leaflet.JAVASCRIPT_URL` - `datasette.leaflet.CSS_URL` The JavaScript is packaged as a [JavaScript module](https://developer.m… | Simon Willison | text/markdown | https://github.com/simonw/datasette-leaflet | Apache License, Version 2.0 | https://pypi.org/project/datasette-leaflet/ | https://pypi.org/project/datasette-leaflet/ | {"CI": "https://github.com/simonw/datasette-leaflet/actions", "Changelog": "https://github.com/simonw/datasette-leaflet/releases", "Homepage": "https://github.com/simonw/datasette-leaflet", "Issues": "https://github.com/simonw/datasette-leaflet/issues"} | https://pypi.org/project/datasette-leaflet/0.2.2/ | ["datasette", "pytest ; extra == 'test'", "pytest-asyncio ; extra == 'test'"] | >=3.6 | 0.2.2 | 0 |
datasette-leaflet-freedraw | Draw polygons on maps in Datasette | [] | # datasette-leaflet-freedraw [](https://pypi.org/project/datasette-leaflet-freedraw/) [](https://github.com/simonw/datasette-leaflet-freedraw/releases) [](https://github.com/simonw/datasette-leaflet-freedraw/actions?query=workflow%3ATest) [](https://github.com/simonw/datasette-leaflet-freedraw/blob/main/LICENSE) Draw polygons on maps in Datasette Project background: [Drawing shapes on a map to query a SpatiaLite database](https://simonwillison.net/2021/Jan/24/drawing-shapes-spatialite/). ## Installation Install this plugin in the same environment as Datasette. $ datasette install datasette-leaflet-freedraw ## Usage If a table has a SpatiaLite `geometry` column, the plugin will add a map interface to the table page allowing users to draw a shape on the map to find rows with a geometry that intersects that shape. The plugin can also work with arbitrary SQL queries. There it looks for input fields with a name of `freedraw` or that ends in `_freedraw` and replaces them with a map interface. The map interface uses the [FreeDraw](https://freedraw.herokuapp.com/) Leaflet plugin. ## Demo You can try out this plugin to run searches against the GreenInfo Network California Protected Areas Database. Here's [an example query](https://calands.datasettes.com/calands?sql=select%0D%0A++AsGeoJSON%28geometry%29%2C+*%0D%0Afrom%0D%0A++CPAD_2020a_SuperUnits%0D%0Awhere%0D%0A++PARK_NAME+like+%27%25mini%25%27+and%0D%0A++Intersects%28GeomFromGeoJSON%28%3Afreedraw%29%2C+geometry%29+%3D+1%0D%0A++and+CPAD_2020a_SuperUnits.rowid+in+%28%0D%0A++++select%0D%0A++++++rowid%0D%0A++++from%0D%0A++++++SpatialIndex%0D%0A++++where%0D%0A++++++f_table_name+%3D+%27CPAD_2020a_SuperUnits%27%0D%0A+++… | Simon Willison | text/markdown | https://github.com/simonw/datasette-leaflet-freedraw | Apache License, Version 2.0 | https://pypi.org/project/datasette-leaflet-freedraw/ | https://pypi.org/project/datasette-leaflet-freedraw/ | {"CI": "https://github.com/simonw/datasette-leaflet-freedraw/actions", "Changelog": "https://github.com/simonw/datasette-leaflet-freedraw/releases", "Homepage": "https://github.com/simonw/datasette-leaflet-freedraw", "Issues": "https://github.com/simonw/datasette-leaflet-freedraw/issues"} | https://pypi.org/project/datasette-leaflet-freedraw/0.3.1/ | ["datasette (>=0.60)", "datasette-leaflet (>=0.2)", "pytest ; extra == 'test'", "pytest-asyncio ; extra == 'test'"] | >=3.6 | 0.3.1 | 0 | ||||||
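Under the hood the drawn shape is just a GeoJSON geometry passed to the query's `:freedraw` parameter, so the same search can be scripted. A sketch with a hand-written polygon (the coordinates and the simplified query are illustrative):

```python
# Passing a hand-built polygon to a query's :freedraw parameter.
import json
import urllib.parse

polygon = {
    "type": "Polygon",
    "coordinates": [[
        [-122.5, 37.6], [-122.3, 37.6], [-122.3, 37.9],
        [-122.5, 37.9], [-122.5, 37.6],
    ]],
}
params = urllib.parse.urlencode({
    "sql": (
        "select PARK_NAME from CPAD_2020a_SuperUnits "
        "where Intersects(GeomFromGeoJSON(:freedraw), geometry) = 1"
    ),
    "freedraw": json.dumps(polygon),
})
url = "https://calands.datasettes.com/calands.json?" + params
```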
datasette-leaflet-geojson | A Datasette plugin that renders GeoJSON columns using Leaflet | [] | # datasette-leaflet-geojson [](https://pypi.org/project/datasette-leaflet-geojson/) [](https://github.com/simonw/datasette-leaflet-geojson/releases) [](https://github.com/simonw/datasette-leaflet-geojson/actions?query=workflow%3ATest) [](https://github.com/simonw/datasette-leaflet-geojson/blob/main/LICENSE) Datasette plugin that replaces any GeoJSON column values with a Leaflet map ## Installation Install this plugin in the same environment as Datasette. $ datasette install datasette-leaflet-geojson ## Usage Any columns containing valid GeoJSON strings will have their contents replaced with a Leaflet map when they are displayed in the Datasette interface. ## Demo You can try this plugin out at https://calands.datasettes.com/calands/superunits_with_maps?MNG_AGENCY=Palo+Alto%2C+City+of  ## Configuration By default this plugin displays maps for the first ten rows, and shows a "Click to load map" prompt for rows past the first ten. You can change this limit using the `default_maps_to_load` plugin configuration setting. Add this to your `metadata.json`: ```json { "plugins": { "datasette-leaflet-geojson": { "default_maps_to_load": 20 } } } ``` Then run Datasette with `datasette mydb.db -m metadata.json`. | Simon Willison | text/markdown | https://github.com/simonw/datasette-leaflet-geojson | Apache License, Version 2.0 | https://pypi.org/project/datasette-leaflet-geojson/ | https://pypi.org/project/datasette-leaflet-geojson/ | {"Homepage": "https://github.com/simonw/datasette-leaflet-geojson"} | https://pypi.org/project/datasette-leaflet-geojson/0.8/ | ["datasette (>=0.54)", "datasette-leaflet (>=0.2)", "pytest ; extra == 'test'", "pytest-asyncio ; extra == 'test'"] | 0.8 | 0 | |||||||
datasette-mask-columns | Datasette plugin that masks specified database columns | [] | # datasette-mask-columns [](https://pypi.org/project/datasette-mask-columns/) [](https://github.com/simonw/datasette-mask-columns/releases) [](https://github.com/simonw/datasette-mask-columns/actions?query=workflow%3ATest) [](https://github.com/simonw/datasette-mask-columns/blob/main/LICENSE) Datasette plugin that masks specified database columns ## Installation pip install datasette-mask-columns This depends on plugin hook changes in a not-yet released branch of Datasette. See [issue #678](https://github.com/simonw/datasette/issues/678) for details. ## Usage In your `metadata.json` file add a section like this describing the database and table in which you wish to mask columns: ```json { "databases": { "my-database": { "plugins": { "datasette-mask-columns": { "users": ["password"] } } } } } ``` All SQL queries against the `users` table in `my-database.db` will now return `null` for the `password` column, no matter what value that column actually holds. The table page for `users` will display the text `REDACTED` in the masked column. This visual hint will only be available on the table page; it will not display this text for arbitrary queries against the table. | Simon Willison | text/markdown | https://github.com/simonw/datasette-mask-columns | Apache License, Version 2.0 | https://pypi.org/project/datasette-mask-columns/ | https://pypi.org/project/datasette-mask-columns/ | {"Homepage": "https://github.com/simonw/datasette-mask-columns"} | https://pypi.org/project/datasette-mask-columns/0.2.1/ | ["datasette", "pytest ; extra == 'test'", "pytest-asyncio ; extra == 'test'", "httpx ; extra == 'test'", "sqlite-utils ; extra == 'test'"] | 0.2.1 | 0 | |||||||
datasette-media | Datasette plugin for serving media based on a SQL query | [] | # datasette-media [](https://pypi.org/project/datasette-media/) [](https://github.com/simonw/datasette-media/releases) [](https://circleci.com/gh/simonw/datasette-media) [](https://github.com/simonw/datasette-media/blob/master/LICENSE) Datasette plugin for serving media based on a SQL query. Use this when you have a database table containing references to files on disk - or binary content stored in BLOB columns - that you would like to be able to serve to your users. ## Installation Install this plugin in the same environment as Datasette. $ pip install datasette-media ### HEIC image support Modern iPhones save their photos using the [HEIC image format](https://en.wikipedia.org/wiki/High_Efficiency_Image_File_Format). Processing these images requires an additional dependency, [pyheif](https://pypi.org/project/pyheif/). You can include this dependency by running: $ pip install datasette-media[heif] ## Usage You can use this plugin to configure Datasette to serve static media based on SQL queries to an underlying database table. Media will be served from URLs that start with `/-/media/`. The full URL to each media asset will look like this: /-/media/type-of-media/media-key `type-of-media` will correspond to a configured SQL query, and might be something like `photo`. `media-key` will be an identifier that is used as part of the underlying SQL query to find which file should be served. ### Serving static files from disk The following ``metadata.json`` configuration will cause this plugin to serve files from disk, based on queries to a database table called `apple_photos`. ```json { "plugins": { "datasette-media": { "photo": { "sql": "select … | Simon Willison | text/markdown | https://github.com/simonw/datasette-media | Apache License, Version 2.0 | https://pypi.org/project/datasette-media/ | https://pypi.org/project/datasette-media/ | {"CI": "https://app.circleci.com/pipelines/github/simonw/datasette-media", "Changelog": "https://github.com/simonw/datasette-media/releases", "Homepage": "https://github.com/simonw/datasette-media", "Issues": "https://gitlab.com/simonw/datasette-media/issues"} | https://pypi.org/project/datasette-media/0.5/ | ["datasette (>=0.44)", "Pillow (>=7.1.2)", "httpx (>=0.13.3)", "pyheif (>=0.4) ; extra == 'heif'", "asgiref ; extra == 'test'", "pytest ; extra == 'test'", "pytest-asyncio ; extra == 'test'", "sqlite-utils ; extra == 'test'", "pytest-httpx (>=0.4.0) ; extra == 'test'"] | 0.5 | 0 | |||||||
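The datasette-media configuration example above is cut off mid-query. As a hedged sketch of the shape it describes - assuming, per the plugin's full documentation, that the SQL receives the URL's `media-key` as `:key` and must return a `filepath` column pointing at the file to serve - a complete block might look like:

```json
{
  "plugins": {
    "datasette-media": {
      "photo": {
        "sql": "select filepath from apple_photos where uuid = :key"
      }
    }
  }
}
```

With that in place, a request to `/-/media/photo/<uuid>` would serve the file found at the returned `filepath`.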
datasette-mp3-audio | Turn .mp3 URLs into an audio player in the Datasette interface | ["Framework :: Datasette", "License :: OSI Approved :: Apache Software License"] | # datasette-mp3-audio [](https://pypi.org/project/datasette-mp3-audio/) [](https://github.com/simonw/datasette-mp3-audio/releases) [](https://github.com/simonw/datasette-mp3-audio/actions?query=workflow%3ATest) [](https://github.com/simonw/datasette-mp3-audio/blob/main/LICENSE) Turn .mp3 URLs into an audio player in the Datasette interface ## Installation Install this plugin in the same environment as Datasette. datasette install datasette-mp3-audio ## Demo Try this plugin at [https://scotrail.datasette.io/scotrail/announcements](https://scotrail.datasette.io/scotrail/announcements) The demo uses ScotRail train announcements from [matteason/scotrail-announcements-june-2022](https://github.com/matteason/scotrail-announcements-june-2022). ## Usage Once installed, any cells with a value that ends in `.mp3` and starts with `http://`, `https://` or `/` will be turned into an embedded HTML audio element like this: ```html <audio controls src="... value ...">Audio not supported</audio> ``` A "Play X MP3s on this page" button will be added to the top of any table page listing more than one MP3. ## Development To set up this plugin locally, first checkout the code. Then create a new virtual environment: cd datasette-mp3-audio python3 -m venv venv source venv/bin/activate Now install the dependencies and test dependencies: pip install -e '.[test]' To run the tests: pytest | Simon Willison | text/markdown | https://github.com/simonw/datasette-mp3-audio | Apache License, Version 2.0 | https://pypi.org/project/datasette-mp3-audio/ | https://pypi.org/project/datasette-mp3-audio/ | {"CI": "https://github.com/simonw/datasette-mp3-audio/actions", "Changelog": "https://github.com/simonw/datasette-mp3-audio/releases", "Homepage": "https://github.com/simonw/datasette-mp3-audio", "Issues": "https://github.com/simonw/datasette-mp3-audio/issues"} | https://pypi.org/project/datasette-mp3-audio/0.2/ | ["datasette", "pytest ; extra == 'test'", "pytest-asyncio ; extra == 'test'", "sqlite-utils ; extra == 'test'"] | >=3.7 | 0.2 | 0 | ||||||
datasette-multiline-links | Make multiple newline separated URLs clickable in Datasette | ["Framework :: Datasette", "License :: OSI Approved :: Apache Software License"] | # datasette-multiline-links [](https://pypi.org/project/datasette-multiline-links/) [](https://github.com/simonw/datasette-multiline-links/releases) [](https://github.com/simonw/datasette-multiline-links/actions?query=workflow%3ATest) [](https://github.com/simonw/datasette-multiline-links/blob/main/LICENSE) Make multiple newline separated URLs clickable in Datasette ## Installation Install this plugin in the same environment as Datasette. datasette install datasette-multiline-links ## Usage Once installed, if a cell has contents like this: ``` https://example.com Not a link https://google.com ``` It will be rendered as: ```html <a href="https://example.com">https://example.com</a> Not a link <a href="https://google.com">https://google.com</a> ``` ## Development To set up this plugin locally, first checkout the code. Then create a new virtual environment: cd datasette-multiline-links python3 -m venv venv source venv/bin/activate Now install the dependencies and test dependencies: pip install -e '.[test]' To run the tests: pytest | Simon Willison | text/markdown | https://github.com/simonw/datasette-multiline-links | Apache License, Version 2.0 | https://pypi.org/project/datasette-multiline-links/ | https://pypi.org/project/datasette-multiline-links/ | {"CI": "https://github.com/simonw/datasette-multiline-links/actions", "Changelog": "https://github.com/simonw/datasette-multiline-links/releases", "Homepage": "https://github.com/simonw/datasette-multiline-links", "Issues": "https://github.com/simonw/datasette-multiline-links/issues"} | https://pypi.org/project/datasette-multiline-links/0.1/ | ["datasette", "pytest ; extra == 'test'", "pytest-asyncio ; extra == 'test'"] | >=3.7 | 0.1 | 0 | ||||||
datasette-nteract-data-explorer | automatic visual data explorer for datasette | ["Framework :: Datasette", "License :: OSI Approved :: Apache Software License"] | # datasette-nteract-data-explorer [](https://pypi.org/project/datasette-nteract-data-explorer/) [](https://github.com/hydrosquall/datasette-nteract-data-explorer/releases) [](https://github.com/hydrosquall/datasette-nteract-data-explorer/actions?query=workflow%3ATest) [](https://github.com/hydrosquall/datasette-nteract-data-explorer/blob/main/LICENSE) An automatic data visualization plugin for the [Datasette](https://datasette.io/) ecosystem. See your dataset from multiple views with an easy-to-use, customizable menu-based interface. ## Demo Try the [live demo](https://datasette-nteract-data-explorer.vercel.app/happy_planet_index/hpi_cleaned?_size=137)  _Running Datasette with the Happy Planet Index dataset_ ## Installation Install this plugin in the same Python environment as Datasette. ```bash datasette install datasette-nteract-data-explorer ``` ## Usage - Click "View in Data Explorer" to expand the visualization panel - Click the icons on the right side to change the visualization type. - Use the menus underneath the graphing area to configure your graph (e.g. change which columns to graph, colors to use, etc) - Use "advanced settings" mode to override the inferred column types. For example, you may want to treat a number as a "string" to be able to use it as a category. - See a [live demo](https://data-explorer.nteract.io/) of the original Nteract data-explorer component used in isolation. You can run a minimal demo after the installation step ```bash datasette -i demo/happy_planet_i… | Cameron Yick | text/markdown | https://github.com/hydrosquall/datasette-nteract-data-explorer | Apache License, Version 2.0 | https://pypi.org/project/datasette-nteract-data-explorer/ | https://pypi.org/project/datasette-nteract-data-explorer/ | {"CI": "https://github.com/hydrosquall/datasette-nteract-data-explorer/actions", "Changelog": "https://github.com/hydrosquall/datasette-nteract-data-explorer/releases", "Homepage": "https://github.com/hydrosquall/datasette-nteract-data-explorer", "Issues": "https://github.com/hydrosquall/datasette-nteract-data-explorer/issues"} | https://pypi.org/project/datasette-nteract-data-explorer/0.5.1/ | ["datasette", "pytest ; extra == 'test'", "pytest-asyncio ; extra == 'test'"] | >=3.7 | 0.5.1 | 0 | ||||||
datasette-packages | Show a list of currently installed Python packages | ["Framework :: Datasette", "License :: OSI Approved :: Apache Software License"] | # datasette-packages [](https://pypi.org/project/datasette-packages/) [](https://github.com/simonw/datasette-packages/releases) [](https://github.com/simonw/datasette-packages/actions?query=workflow%3ATest) [](https://github.com/simonw/datasette-packages/blob/main/LICENSE) Show a list of currently installed Python packages ## Installation Install this plugin in the same environment as Datasette. $ datasette install datasette-packages ## Usage Visit `/-/packages` to see a list of installed Python packages. Visit `/-/packages.json` to get that back as JSON. ## Demo The output of this plugin can be seen here: - https://latest-with-plugins.datasette.io/-/packages - https://latest-with-plugins.datasette.io/-/packages.json ## With datasette-graphql If you have version 2.1 or higher of the [datasette-graphql](https://datasette.io/plugins/datasette-graphql) plugin installed you can also query the list of packages using this GraphQL query: ```graphql { packages { name version } } ``` [Demo of this query](https://latest-with-plugins.datasette.io/graphql?query=%7B%0A%20%20%20%20packages%20%7B%0A%20%20%20%20%20%20%20%20name%0A%20%20%20%20%20%20%20%20version%0A%20%20%20%20%7D%0A%7D). ## Development To set up this plugin locally, first checkout the code. Then create a new virtual environment: cd datasette-packages python3 -mvenv venv source venv/bin/activate Now install the dependencies and test dependencies: pip install -e '.[test]' To run the tests: pytest | Simon Willison | text/markdown | https://github.com/simonw/datasette-packages | Apache License, Version 2.0 | https://pypi.org/project/datasette-packages/ | https://pypi.org/project/datasette-packages/ | {"CI": "https://github.com/simonw/datasette-packages/actions", "Changelog": "https://github.com/simonw/datasette-packages/releases", "Homepage": "https://github.com/simonw/datasette-packages", "Issues": "https://github.com/simonw/datasette-packages/issues"} | https://pypi.org/project/datasette-packages/0.2/ | ["datasette", "pytest ; extra == 'test'", "pytest-asyncio ; extra == 'test'", "datasette-graphql (>=2.1) ; extra == 'test'"] | >=3.7 | 0.2 | 0 | ||||||
datasette-permissions-sql | Datasette plugin for configuring permission checks using SQL queries | [] | # datasette-permissions-sql [](https://pypi.org/project/datasette-permissions-sql/) [](https://circleci.com/gh/simonw/datasette-permissions-sql) [](https://github.com/simonw/datasette-permissions-sql/blob/master/LICENSE) Datasette plugin for configuring permission checks using SQL queries ## Installation Install this plugin in the same environment as Datasette. $ pip install datasette-permissions-sql ## Usage First, read up on how Datasette's [authentication and permissions system](https://datasette.readthedocs.io/en/latest/authentication.html) works. This plugin lets you define rules containing SQL queries that are executed to see if the currently authenticated actor has permission to perform certain actions. Consider a canned query which authenticated users should only be able to execute if a row in the `users` table says that they are a member of staff. That `users` table in the `mydatabase.db` database could look like this: | id | username | is_staff | |--|--------|--------| | 1 | cleopaws | 0 | | 2 | simon | 1 | Authenticated users have an `actor` that looks like this: ```json { "id": 2, "username": "simon" } ``` To configure the canned query to only be executable by staff users, add the following to your `metadata.json`: ```json { "plugins": { "datasette-permissions-sql": [ { "action": "view-query", "resource": ["mydatabase", "promote_to_staff"], "sql": "SELECT * FROM users WHERE is_staff = 1 AND id = :actor_id" } ] }, "databases": { "mydatabase": { "queries": { "promote_to_staff": { "sql": "UPDATE users SET is_staff=1 WHERE id=:id", "write": true } } }… | Simon Willison | text/markdown | https://github.com/simonw/datasette-permissions-sql | Apache License, Version 2.0 | https://pypi.org/project/datasette-permissions-sql/ | https://pypi.org/project/datasette-permissions-sql/ | {"Homepage": "https://github.com/simonw/datasette-permissions-sql"} | https://pypi.org/project/datasette-permissions-sql/0.3a0/ | ["datasette (>=0.44)", "pytest ; extra == 'test'", "pytest-asyncio ; extra == 'test'", "httpx ; extra == 'test'", "sqlite-utils (~=2.0) ; extra == 'test'"] | 0.3a0 | 0 | |||||||
datasette-placekey | SQL functions for working with placekeys | [] | # datasette-placekey [](https://pypi.org/project/datasette-placekey/) [](https://github.com/simonw/datasette-placekey/releases) [](https://github.com/simonw/datasette-placekey/actions?query=workflow%3ATest) [](https://github.com/simonw/datasette-placekey/blob/main/LICENSE) SQL functions for working with [placekeys](https://www.placekey.io/). ## Installation Install this plugin in the same environment as Datasette. $ datasette install datasette-placekey ## Usage The following SQL functions are exposed - [documentation here](https://placekey.github.io/placekey-py/placekey.html#module-placekey.placekey). ```sql select geo_to_placekey(33.0896104,129.7900839), placekey_to_geo('@6nh-nhh-kvf'), placekey_to_geo_latitude('@6nh-nhh-kvf'), placekey_to_geo_longitude('@6nh-nhh-kvf'), placekey_to_h3('@6nh-nhh-kvf'), h3_to_placekey('8a30d94e4c87fff'), placekey_to_geojson('@6nh-nhh-kvf'), placekey_to_wkt('@6nh-nhh-kvf'), placekey_format_is_valid('@6nh-nhh-kvf'); ``` ## Development To set up this plugin locally, first checkout the code. Then create a new virtual environment: cd datasette-placekey python3 -mvenv venv source venv/bin/activate Or if you are using `pipenv`: pipenv shell Now install the dependencies and test dependencies: pip install -e '.[test]' To run the tests: pytest | Simon Willison | text/markdown | https://github.com/simonw/datasette-placekey | Apache License, Version 2.0 | https://pypi.org/project/datasette-placekey/ | https://pypi.org/project/datasette-placekey/ | {"CI": "https://github.com/simonw/datasette-placekey/actions", "Changelog": "https://github.com/simonw/datasette-placekey/releases", "Homepage": "https://github.com/simonw/datasette-placekey", "Issues": "https://github.com/simonw/datasette-placekey/issues"} | https://pypi.org/project/datasette-placekey/0.1/ | ["datasette", "placekey", "pytest ; extra == 'test'", "pytest-asyncio ; extra == 'test'"] | >=3.6 | 0.1 | 0 | ||||||
datasette-pretty-json | Datasette plugin that pretty-prints any column values that are valid JSON objects or arrays | [] | # datasette-pretty-json [](https://pypi.org/project/datasette-pretty-json/) [](https://github.com/simonw/datasette-pretty-json/releases) [](https://github.com/simonw/datasette-pretty-json/actions?query=workflow%3ATest) [](https://github.com/simonw/datasette-pretty-json/blob/main/LICENSE) [Datasette](https://github.com/simonw/datasette) plugin that pretty-prints any column values that are valid JSON objects or arrays. You may also be interested in [datasette-json-html](https://github.com/simonw/datasette-json-html). | Simon Willison | text/markdown | https://github.com/simonw/datasette-pretty-json | Apache License, Version 2.0 | https://pypi.org/project/datasette-pretty-json/ | https://pypi.org/project/datasette-pretty-json/ | {"Homepage": "https://github.com/simonw/datasette-pretty-json"} | https://pypi.org/project/datasette-pretty-json/0.2.2/ | ["datasette", "pytest ; extra == 'test'"] | 0.2.2 | 0 | |||||||
datasette-pretty-traces | Prettier formatting for ?_trace=1 traces | ["Framework :: Datasette", "License :: OSI Approved :: Apache Software License"] | # datasette-pretty-traces [](https://pypi.org/project/datasette-pretty-traces/) [](https://github.com/simonw/datasette-pretty-traces/releases) [](https://github.com/simonw/datasette-pretty-traces/actions?query=workflow%3ATest) [](https://github.com/simonw/datasette-pretty-traces/blob/main/LICENSE) Prettier formatting for `?_trace=1` traces ## Installation Install this plugin in the same environment as Datasette. $ datasette install datasette-pretty-traces ## Usage Once installed, run Datasette using `--setting trace_debug 1`: datasette fixtures.db --setting trace_debug 1 Then navigate to any page and add `?_trace=` to the URL: http://localhost:8001/?_trace=1 The plugin will scroll you down the page to the visualized trace information. ## Demo You can try out the demo here: - [/?_trace=1](https://latest-with-plugins.datasette.io/?_trace=1) tracing the homepage - [/github/commits?_trace=1](https://latest-with-plugins.datasette.io/github/commits?_trace=1) tracing a table page ## Screenshot  ## Development To set up this plugin locally, first checkout the code. Then create a new virtual environment: cd datasette-pretty-traces python3 -mvenv venv source venv/bin/activate Or if you are using `pipenv`: pipenv shell Now install the dependencies and test dependencies: pip install -e '.[test]' To run the tests: pytest | Simon Willison | text/markdown | https://github.com/simonw/datasette-pretty-traces | Apache License, Version 2.0 | https://pypi.org/project/datasette-pretty-traces/ | https://pypi.org/project/datasette-pretty-traces/ | {"CI": "https://github.com/simonw/datasette-pretty-traces/actions", "Changelog": "https://github.com/simonw/datasette-pretty-traces/releases", "Homepage": "https://github.com/simonw/datasette-pretty-traces", "Issues": "https://github.com/simonw/datasette-pretty-traces/issues"} | https://pypi.org/project/datasette-pretty-traces/0.4/ | ["datasette", "pytest ; extra == 'test'", "pytest-asyncio ; extra == 'test'"] | >=3.6 | 0.4 | 0 | ||||||
datasette-psutil | Datasette plugin adding a /-/psutil debugging endpoint | [] | # datasette-psutil [](https://pypi.org/project/datasette-psutil/) [](https://circleci.com/gh/simonw/datasette-psutil) [](https://github.com/simonw/datasette-psutil/blob/master/LICENSE) Datasette plugin adding a `/-/psutil` debugging endpoint ## Installation Install this plugin in the same environment as Datasette. $ pip install datasette-psutil ## Usage Visit `/-/psutil` on your Datasette instance to see various information provided by [psutil](https://psutil.readthedocs.io/). ## Example You can visit a live demo here: https://datasette-psutil-demo-j7hipcg4aq-uw.a.run.app/-/psutil Sample output: ``` process.open_files() popenfile(path='/tmp/fixtures.db', fd=6) process.connections() pconn(fd=5, family=<AddressFamily.AF_INET: 2>, type=<SocketKind.SOCK_STREAM: 1>, laddr=addr(ip='169.254.8.130', port=8080), raddr=addr(ip='169.254.8.129', port=52621), status='ESTABLISHED') pconn(fd=3, family=<AddressFamily.AF_INET: 2>, type=<SocketKind.SOCK_STREAM: 1>, laddr=addr(ip='0.0.0.0', port=8080), raddr=(), status='LISTEN') process.memory_info() pmem(rss=56827904, vms=242540544, shared=0, text=0, lib=0, data=0, dirty=0) process.cmdline() '/usr/local/bin/python' '/usr/local/bin/datasette' 'serve' '--host' '0.0.0.0' '-i' 'fixtures.db' '--cors' '--inspect-file' 'inspect-data.json' '--port' '8080' process.parents() psutil.Process(pid=1, name='sh', started='23:19:29') process.threads() pthread(id=2, user_time=7.27, system_time=3.99) pthread(id=4, user_time=0.01, system_time=0.0) pthread(id=5, user_time=0.0, system_time=0.0) pthread(id=6, user_time=0.0, system_time=0.0) pthread(id=7, user_time=0.0, system_time=0.0) pthread(id=8, user_time=0.0, system_time=0.0) psutil.getloadavg() (0.0, 0.0, 0.0) psutil.cpu_times(True) scputimes(user=0.0, nice=0.0, system=0.0, idle… | Simon Willison | text/markdown | https://github.com/simonw/datasette-psutil | Apache License, Version 2.0 | https://pypi.org/project/datasette-psutil/ | https://pypi.org/project/datasette-psutil/ | {"Homepage": "https://github.com/simonw/datasette-psutil"} | https://pypi.org/project/datasette-psutil/0.2/ | ["datasette (>=0.44)", "psutil", "pytest ; extra == 'test'", "pytest-asyncio ; extra == 'test'", "httpx ; extra == 'test'"] | 0.2 | 0 | |||||||
datasette-public | Make specific Datasette tables visible to the public | ["Framework :: Datasette", "License :: OSI Approved :: Apache Software License"] | # datasette-public [](https://pypi.org/project/datasette-public/) [](https://github.com/simonw/datasette-public/releases) [](https://github.com/simonw/datasette-public/actions?query=workflow%3ATest) [](https://github.com/simonw/datasette-public/blob/main/LICENSE) Make specific Datasette tables visible to the public ## Installation Install this plugin in the same environment as Datasette. datasette install datasette-public ## Usage Any tables listed in the `_public_tables` table will be visible to the public, even if the rest of the Datasette instance does not allow anonymous access. The root user (and any user with the new `public-tables` permission) will get a new option in the table action menu allowing them to toggle a table between public and private. Installing this plugin also causes `allow-sql` permission checks to fall back to checking if the user has access to the entire database. This is to avoid users with access to a single public table being able to access data from other tables using the `?_where=` query string parameter. ## Configuration This plugin creates a new table in one of your databases called `_public_tables`. This table defaults to being created in the first database passed to Datasette. To create it in a different named database, use this plugin configuration: ```json { "plugins": { "datasette-public": { "database": "database_to_create_table_in" } } } ``` ## Development To set up this plugin locally, first checkout the code. Then create a new virtual environment: cd datasette-public python3 -m venv venv source venv/bin/activate Now install the dependencies and test dependencies: pip install -e '.[test]' To run the test… | Simon Willison | text/markdown | https://github.com/simonw/datasette-public | Apache License, Version 2.0 | https://pypi.org/project/datasette-public/ | https://pypi.org/project/datasette-public/ | {"CI": "https://github.com/simonw/datasette-public/actions", "Changelog": "https://github.com/simonw/datasette-public/releases", "Homepage": "https://github.com/simonw/datasette-public", "Issues": "https://github.com/simonw/datasette-public/issues"} | https://pypi.org/project/datasette-public/0.2/ | ["datasette", "pytest ; extra == 'test'", "pytest-asyncio ; extra == 'test'"] | >=3.7 | 0.2 | 0 | ||||||
datasette-publish-fly | Datasette plugin for publishing data using Fly | [] | # datasette-publish-fly [](https://pypi.org/project/datasette-publish-fly/) [](https://github.com/simonw/datasette-publish-fly/releases) [](https://github.com/simonw/datasette-publish-fly/actions?query=workflow%3ATest) [](https://github.com/simonw/datasette-publish-fly/blob/main/LICENSE) [Datasette](https://datasette.io/) plugin for deploying Datasette instances to [Fly.io](https://fly.io/). Project background: [Using SQLite and Datasette with Fly Volumes](https://simonwillison.net/2022/Feb/15/fly-volumes/) ## Installation Install this plugin in the same environment as Datasette. $ datasette install datasette-publish-fly ## Deploying read-only data First, install the `flyctl` command-line tool by [following their instructions](https://fly.io/docs/getting-started/installing-flyctl/). Run `flyctl auth signup` to create an account there, or `flyctl auth login` if you already have one. You can now use `datasette publish fly` to publish one or more SQLite database files: datasette publish fly my-database.db --app="my-data-app" The argument you pass to `--app` will be used for the URL of your application: `my-data-app.fly.dev`. To update an application, run the publish command passing the same application name to the `--app` option. Fly have [a free tier](https://fly.io/docs/about/pricing/#free-allowances), beyond which they will charge you monthly for each application you have live. Details of their pricing can be [found on their site](https://fly.io/docs/pricing/). Your application will be deployed at `https://your-app-name.fly.dev/` - be aware that it may take several minutes to start working the first time you deploy it. ## Using Fly volumes for writable databases… | Simon Willison | text/markdown | https://github.com/simonw/datasette-publish-fly | Apache License, Version 2.0 | https://pypi.org/project/datasette-publish-fly/ | https://pypi.org/project/datasette-publish-fly/ | {"CI": "https://github.com/simonw/datasette-publish-fly/actions", "Changelog": "https://github.com/simonw/datasette-publish-fly/releases", "Homepage": "https://github.com/simonw/datasette-publish-fly", "Issues": "https://github.com/simonw/datasette-publish-fly/issues"} | https://pypi.org/project/datasette-publish-fly/1.2/ | ["datasette (>=0.60.2)", "pytest ; extra == 'test'", "pytest-mock ; extra == 'test'", "cogapp ; extra == 'test'"] | 1.2 | 0 | |||||||
datasette-publish-vercel | Datasette plugin for publishing data using Vercel | [] | # datasette-publish-vercel [](https://pypi.org/project/datasette-publish-vercel/) [](https://github.com/simonw/datasette-publish-vercel/releases) [](https://github.com/simonw/datasette-publish-vercel/actions?query=workflow%3ATest) [](https://github.com/simonw/datasette-publish-vercel/blob/main/LICENSE) Datasette plugin for publishing data using [Vercel](https://vercel.com/). ## Installation Install this plugin in the same environment as Datasette. $ datasette install datasette-publish-vercel ## Usage First, install the Vercel CLI tool by [following their instructions](https://vercel.com/download). Run `vercel login` to log in to (or create) an account. Now you can use `datasette publish vercel` to publish your data: datasette publish vercel my-database.db --project=my-database The `--project` argument is required - it specifies the project name that should be used for your deployment. This will be used as part of the deployment's URL. ### Other options * `--no-prod` deploys to the project without updating the "production" URL alias to point to that new deployment. Without that option all deploys go directly to production. * `--debug` enables the Vercel CLI debug output. * `--token` allows you to pass a Vercel authentication token, rather than needing to first run `vercel login` to configure the tool. Tokens can be created in the Vercel web dashboard under Account Settings -> Tokens. * `--public` runs `vercel --public` to publish the application source code at `/_src` e.g. https://datasette-public.now.sh/_src and make recent logs visible at `/_logs` e.g. https://datasette-public.now.sh/_logs * `--generate-dir` - by default this tool generates a new Vercel app in a… | Simon Willison | text/markdown | https://github.com/simonw/datasette-publish-vercel | Apache License, Version 2.0 | https://pypi.org/project/datasette-publish-vercel/ | https://pypi.org/project/datasette-publish-vercel/ | {"CI": "https://github.com/simonw/datasette-publish-vercel/actions", "Changelog": "https://github.com/simonw/datasette-publish-vercel/releases", "Homepage": "https://github.com/simonw/datasette-publish-vercel", "Issues": "https://github.com/simonw/datasette-publish-vercel/issues"} | https://pypi.org/project/datasette-publish-vercel/0.14.2/ | ["datasette (>=0.59)", "pytest ; extra == 'test'"] | 0.14.2 | 0 | |||||||
datasette-pyinstrument | Use pyinstrument to analyze Datasette page performance | [] | # datasette-pyinstrument [](https://pypi.org/project/datasette-pyinstrument/) [](https://github.com/simonw/datasette-pyinstrument/releases) [](https://github.com/simonw/datasette-pyinstrument/actions?query=workflow%3ATest) [](https://github.com/simonw/datasette-pyinstrument/blob/main/LICENSE) Use pyinstrument to analyze Datasette page performance ## Installation Install this plugin in the same environment as Datasette. $ datasette install datasette-pyinstrument ## Usage Once installed, adding `?_pyinstrument=1` to any URL within Datasette will replace the output of that page with the pyinstrument profiler results for it. ## Development To set up this plugin locally, first checkout the code. Then create a new virtual environment: cd datasette-pyinstrument python3 -mvenv venv source venv/bin/activate Or if you are using `pipenv`: pipenv shell Now install the dependencies and test dependencies: pip install -e '.[test]' To run the tests: pytest | Simon Willison | text/markdown | https://github.com/simonw/datasette-pyinstrument | Apache License, Version 2.0 | https://pypi.org/project/datasette-pyinstrument/ | https://pypi.org/project/datasette-pyinstrument/ | {"CI": "https://github.com/simonw/datasette-pyinstrument/actions", "Changelog": "https://github.com/simonw/datasette-pyinstrument/releases", "Homepage": "https://github.com/simonw/datasette-pyinstrument", "Issues": "https://github.com/simonw/datasette-pyinstrument/issues"} | https://pypi.org/project/datasette-pyinstrument/0.1/ | ["datasette", "pyinstrument", "pytest ; extra == 'test'", "pytest-asyncio ; extra == 'test'"] | >=3.7 | 0.1 | 0 | ||||||
datasette-query-files | Write Datasette canned queries as plain SQL files | ["Framework :: Datasette", "License :: OSI Approved :: Apache Software License"] | # datasette-query-files [](https://pypi.org/project/datasette-query-files/) [](https://github.com/eyeseast/datasette-query-files/releases) [](https://github.com/eyeseast/datasette-query-files/actions?query=workflow%3ATest) [](https://github.com/eyeseast/datasette-query-files/blob/main/LICENSE) Write Datasette canned queries as plain SQL files. ## Installation Install this plugin in the same environment as Datasette. datasette install datasette-query-files Or using `pip` or `pipenv`: pip install datasette-query-files pipenv install datasette-query-files ## Usage This plugin will look for [canned queries](https://docs.datasette.io/en/stable/sql_queries.html#canned-queries) in the filesystem, in addition to any defined in metadata. Let's say you're working in a directory called `project-directory`, with a database file called `my-project.db`. Start by creating a `queries` directory with a `my-project` directory inside it. Any SQL file inside that `my-project` folder will become a canned query that can be run on the `my-project` database. If you have a `query-name.sql` file and a `query-name.json` (or `query-name.yml`) file in the same directory, the JSON file will be used as query metadata. ``` project-directory/ my-project.db queries/ my-project/ query-name.sql # a query query-name.yml # query metadata ``` ## Development To set up this plugin locally, first checkout the code. Then create a new virtual environment: cd datasette-query-files python3 -m venv venv source venv/bin/activate Now install the dependencies and test dependencies: pip install -e '.[test]' To run the tests: pytest | Chris Amico | text/markdown | https://github.com/eyeseast/datasette-query-files | Apache License, Version 2.0 | https://pypi.org/project/datasette-query-files/ | https://pypi.org/project/datasette-query-files/ | {"CI": "https://github.com/eyeseast/datasette-query-files/actions", "Changelog": "https://github.com/eyeseast/datasette-query-files/releases", "Homepage": "https://github.com/eyeseast/datasette-query-files", "Issues": "https://github.com/eyeseast/datasette-query-files/issues"} | https://pypi.org/project/datasette-query-files/0.1.1/ | ["datasette", "pytest ; extra == 'test'", "pytest-asyncio ; extra == 'test'"] | >=3.7 | 0.1.1 | 0 | ||||||
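To make the datasette-query-files layout above concrete, here is a hypothetical `queries/my-project/query-name.sql` file; the table name is invented purely for illustration:

```sql
-- queries/my-project/query-name.sql
-- Served as the canned query "query-name" on the my-project database.
select count(*) as total_rows from my_table;
```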
datasette-query-history | Datasette plugin that keeps a list of the queries you've run and lets you rerun them. | [] | # datasette-query-history [](https://pypi.org/project/datasette-query-history/) [](https://github.com/bretwalker/datasette-query-history/releases) [](https://github.com/bretwalker/datasette-query-history/actions?query=workflow%3ATest) [](https://github.com/bretwalker/datasette-query-history/blob/main/LICENSE) Datasette plugin that keeps a list of the queries you've run and lets you rerun them. ## Installation Install this plugin in the same environment as Datasette. $ datasette install datasette-query-history ## Usage Click the `Query History` button on the SQL editor page to see previous queries. Click the ⬆︎ button to replace the current query with a previous query. Click the `Clear Query History` button to clear the list of previous queries. <img src="https://raw.githubusercontent.com/bretwalker/datasette-query-history/main/docs/datasette-query-history-example1.png" width="350px" alt="Screenshot of plugin"> ## Development To set up this plugin locally, first checkout the code. Then create a new virtual environment: cd datasette-query-history python3 -mvenv venv source venv/bin/activate Or if you are using `pipenv`: pipenv shell Now install the dependencies and test dependencies: pip install -e '.[test]' To run the tests: pytest | Bret Walker | text/markdown | https://github.com/bretwalker/datasette-query-history | Apache License, Version 2.0 | https://pypi.org/project/datasette-query-history/ | https://pypi.org/project/datasette-query-history/ | {"CI": "https://github.com/bretwalker/datasette-query-history/actions", "Changelog": "https://github.com/bretwalker/datasette-query-history/releases", "Homepage": "https://github.com/bretwalker/datasette-query-history", "Issues": "https://github.com/bretwalker/datasette-query-history/issues"} | https://pypi.org/project/datasette-query-history/0.2.3/ | ["datasette", "pytest ; extra == 'test'", "pytest-asyncio ; extra == 'test'"] | >=3.6 | 0.2.3 | 0 | ||||||
datasette-query-links | Turn SELECT queries returned by a query into links to execute them | [] | # datasette-query-links [](https://pypi.org/project/datasette-query-links/) [](https://github.com/simonw/datasette-query-links/releases) [](https://github.com/simonw/datasette-query-links/actions?query=workflow%3ATest) [](https://github.com/simonw/datasette-query-links/blob/main/LICENSE) Turn SELECT queries returned by a query into links to execute them ## Installation Install this plugin in the same environment as Datasette. $ datasette install datasette-query-links ## Usage This is an experimental plugin, requiring Datasette 0.59a1 or higher. Any SQL query that returns a value that itself looks like a valid SQL query will be converted into a link to execute that SQL query when it is displayed in the Datasette interface. These links will only show for valid SQL queries - if a SQL query would return an error it will not be turned into a link. ## Demo * [Here's an example query](https://latest-with-plugins.datasette.io/fixtures?sql=select%0D%0A++%27select+*+from+%5Bfacetable%5D%27+as+query%0D%0Aunion%0D%0Aselect%0D%0A++%27select+sqlite_version()%27%0D%0Aunion%0D%0Aselect%0D%0A++%27select+this+is+invalid+SQL+so+will+not+be+linked%27) showing the plugin in action. ## Development To set up this plugin locally, first checkout the code. Then create a new virtual environment: cd datasette-query-links python3 -mvenv venv source venv/bin/activate Or if you are using `pipenv`: pipenv shell Now install the dependencies and test dependencies: pip install -e '.[test]' To run the tests: pytest | Simon Willison | text/markdown | https://github.com/simonw/datasette-query-links | Apache License, Version 2.0 | https://pypi.org/project/datasette-query-links/ | https://pypi.org/project/datasette-query-links/ | {"CI": "https://github.com/simonw/datasette-query-links/actions", "Changelog": "https://github.com/simonw/datasette-query-links/releases", "Homepage": "https://github.com/simonw/datasette-query-links", "Issues": "https://github.com/simonw/datasette-query-links/issues"} | https://pypi.org/project/datasette-query-links/0.1.2/ | ["datasette (>=0.59a1)", "pytest ; extra == 'test'", "pytest-asyncio ; extra == 'test'", "beautifulsoup4 ; extra == 'test'"] | >=3.6 | 0.1.2 | 0 | ||||||
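Decoded, the datasette-query-links demo query above is three `union`-ed string literals - the first two are valid SQL and so get rendered as links, the third does not:

```sql
select
  'select * from [facetable]' as query
union
select
  'select sqlite_version()'
union
select
  'select this is invalid SQL so will not be linked'
```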
datasette-redirect-forbidden | Redirect forbidden requests to a login page | ["Framework :: Datasette", "License :: OSI Approved :: Apache Software License"] | # datasette-redirect-forbidden [](https://pypi.org/project/datasette-redirect-forbidden/) [](https://github.com/simonw/datasette-redirect-forbidden/releases) [](https://github.com/simonw/datasette-redirect-forbidden/actions?query=workflow%3ATest) [](https://github.com/simonw/datasette-redirect-forbidden/blob/main/LICENSE) Redirect forbidden requests to a login page ## Installation Install this plugin in the same environment as Datasette. $ datasette install datasette-redirect-forbidden ## Usage Add the following to your `metadata.yml` (or `metadata.json`) file to configure the plugin: ```yaml plugins: datasette-redirect-forbidden: redirect_to: /-/login ``` Any 403 forbidden pages will redirect to the specified page. ## Development To set up this plugin locally, first checkout the code. Then create a new virtual environment: cd datasette-redirect-forbidden python3 -mvenv venv source venv/bin/activate Or if you are using `pipenv`: pipenv shell Now install the dependencies and test dependencies: pip install -e '.[test]' To run the tests: pytest | Simon Willison | text/markdown | https://github.com/simonw/datasette-redirect-forbidden | Apache License, Version 2.0 | https://pypi.org/project/datasette-redirect-forbidden/ | https://pypi.org/project/datasette-redirect-forbidden/ | {"CI": "https://github.com/simonw/datasette-redirect-forbidden/actions", "Changelog": "https://github.com/simonw/datasette-redirect-forbidden/releases", "Homepage": "https://github.com/simonw/datasette-redirect-forbidden", "Issues": "https://github.com/simonw/datasette-redirect-forbidden/issues"} | https://pypi.org/project/datasette-redirect-forbidden/0.1/ | ["datasette", "pytest ; extra == 'test'", "pytest-asyncio ; extra == 'test'"] | >=3.6 | 0.1 | 0 | ||||||
datasette-redirect-to-https | Datasette plugin that redirects all non-https requests to https | [] | # datasette-redirect-to-https [](https://pypi.org/project/datasette-redirect-to-https/) [](https://github.com/simonw/datasette-redirect-to-https/releases) [](https://github.com/simonw/datasette-redirect-to-https/actions?query=workflow%3ATest) [](https://github.com/simonw/datasette-redirect-to-https/blob/main/LICENSE) Datasette plugin that redirects all non-https requests to https ## Installation Install this plugin in the same environment as Datasette. $ datasette install datasette-redirect-to-https ## Usage Once installed, incoming GET requests to the `http://` protocol will be 301 redirected to the `https://` equivalent page. HTTP verbs other than GET will get a 405 Method Not Allowed HTTP error. ## Configuration Some hosting providers handle HTTPS for you, passing requests back to your application server over HTTP. For this plugin to work correctly, you need to detect that the original incoming request came in over HTTP. Hosting providers like this often set an additional HTTP header such as `x-forwarded-proto: http` to let you know the original protocol. You can configure `datasette-redirect-to-https` to respect this header using the following plugin configuration in `metadata.json`: ```json { "plugins": { "datasette-redirect-to-https": { "if_headers": { "x-forwarded-proto": "http" } } } } ``` The above example will redirect to `https://` if the incoming request has a `x-forwarded-proto: http` request header. If multiple `if_headers` are listed, the redirect will occur if any of them match. ## Development To set up this plugin locally, first checkout the code. Then create a new virtual environment: cd … | Simon Willison | text/markdown | https://github.com/simonw/datasette-redirect-to-https | Apache License, Version 2.0 | https://pypi.org/project/datasette-redirect-to-https/ | https://pypi.org/project/datasette-redirect-to-https/ | {"CI": "https://github.com/simonw/datasette-redirect-to-https/actions", "Changelog": "https://github.com/simonw/datasette-redirect-to-https/releases", "Homepage": "https://github.com/simonw/datasette-redirect-to-https", "Issues": "https://github.com/simonw/datasette-redirect-to-https/issues"} | https://pypi.org/project/datasette-redirect-to-https/0.2/ | ["datasette", "pytest ; extra == 'test'", "pytest-asyncio ; extra == 'test'", "asgi-lifespan ; extra == 'test'"] | >=3.6 | 0.2 | 0 | ||||||
datasette-remote-metadata | Periodically refresh Datasette metadata from a remote URL | [] | # datasette-remote-metadata [](https://pypi.org/project/datasette-remote-metadata/) [](https://github.com/simonw/datasette-remote-metadata/releases) [](https://github.com/simonw/datasette-remote-metadata/actions?query=workflow%3ATest) [](https://github.com/simonw/datasette-remote-metadata/blob/main/LICENSE) Periodically refresh Datasette metadata from a remote URL ## Installation Install this plugin in the same environment as Datasette. $ datasette install datasette-remote-metadata ## Usage Add the following to your `metadata.json`: ```json { "plugins": { "datasette-remote-metadata": { "url": "https://example.com/remote-metadata.yml" } } } ``` The plugin will fetch the specified metadata from that URL at startup and combine it with any existing metadata. You can use a URL to either a JSON file or a YAML file. It will periodically refresh that metadata - by default every 30 seconds, unless you specify an alternative `"ttl"` value in the plugin configuration. ## Configuration Available configuration options are as follows: - `"url"` - the URL to retrieve remote metadata from. Can link to a JSON or a YAML file. - `"ttl"` - integer value in seconds: how frequently the plugin should check for fresh metadata. Defaults to 30 seconds. - `"headers"` - a dictionary of additional request headers to send. - `"cachebust"` - if true, a random `?0.29508` value will be added to the query string of the remote metadata to bust any intermediary caches. This example `metadata.json` configuration refreshes every 10 seconds, uses cache busting and sends an `Authorization: Bearer xyz` header with the request: ```json { "plugins": { … | Simon Willison | text/markdown | https://github.com/simonw/datasette-remote-metadata | Apache License, Version 2.0 | https://pypi.org/project/datasette-remote-metadata/ | https://pypi.org/project/datasette-remote-metadata/ | {"CI": "https://github.com/simonw/datasette-remote-metadata/actions", "Changelog": "https://github.com/simonw/datasette-remote-metadata/releases", "Homepage": "https://github.com/simonw/datasette-remote-metadata", "Issues": "https://github.com/simonw/datasette-remote-metadata/issues"} | https://pypi.org/project/datasette-remote-metadata/0.1/ | ["datasette", "pytest ; extra == 'test'", "pytest-asyncio ; extra == 'test'", "pytest-httpx ; extra == 'test'"] | >=3.6 | 0.1 | 0 | ||||||
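The datasette-remote-metadata example configuration in the cell above is truncated; assembling the options it describes (10 second refresh, cache busting, and an `Authorization: Bearer xyz` header) suggests a sketch along these lines:

```json
{
  "plugins": {
    "datasette-remote-metadata": {
      "url": "https://example.com/remote-metadata.yml",
      "ttl": 10,
      "cachebust": true,
      "headers": {
        "Authorization": "Bearer xyz"
      }
    }
  }
}
```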
datasette-render-binary | Datasette plugin for rendering binary data | [] | # datasette-render-binary [](https://pypi.org/project/datasette-render-binary/) [](https://circleci.com/gh/simonw/datasette-render-binary) [](https://github.com/simonw/datasette-render-binary/blob/master/LICENSE) Datasette plugin for rendering binary data. Install this plugin in the same environment as Datasette to enable this new functionality: pip install datasette-render-binary Binary data in cells will now be rendered as a mixture of characters and octets.  | Simon Willison | text/markdown | https://github.com/simonw/datasette-render-binary | Apache License, Version 2.0 | https://pypi.org/project/datasette-render-binary/ | https://pypi.org/project/datasette-render-binary/ | {"Homepage": "https://github.com/simonw/datasette-render-binary"} | https://pypi.org/project/datasette-render-binary/0.3/ | ["datasette", "filetype", "pytest ; extra == 'test'"] | 0.3 | 0 | |||||||
datasette-render-html | Datasette plugin that renders specified cells as HTML | [] | # datasette-render-html [](https://pypi.org/project/datasette-render-html/) [](https://circleci.com/gh/simonw/datasette-render-html) [](https://github.com/simonw/datasette-render-html/blob/master/LICENSE) This Datasette plugin lets you configure Datasette to render specific columns as HTML in the table and row interfaces. This means you can store HTML in those columns and have it rendered as such on those pages. If you have a database called `docs.db` containing a `glossary` table and you want the `definition` column in that table to be rendered as HTML, you would use a `metadata.json` file that looks like this: { "databases": { "docs": { "tables": { "glossary": { "plugins": { "datasette-render-html": { "columns": ["definition"] } } } } } } } ## Security This plugin allows HTML to be rendered exactly as it is stored in the database. As such, you should be sure only to use this against columns with content that you trust - otherwise you could open yourself up to an [XSS attack](https://owasp.org/www-community/attacks/xss/). It's possible to configure this plugin to apply to columns with specific names across whole databases or the full Datasette instance, but doing so is not safe. It could open you up to XSS vulnerabilities where an attacker composes a SQL query that results in a column containing unsafe HTML. As such, you should only use this plugin against specific columns in specific tables, as shown in the example above. | Simon Willison | text/markdown | https://github.com/simonw/datasette-render-html | Apache License, Version 2.0 | https://pypi.org/project/datasette-render-html/ | https://pypi.org/project/datasette-render-html/ | {"Homepage": "https://github.com/simonw/datasette-render-html"} | https://pypi.org/project/datasette-render-html/0.1.2/ | ["datasette", "pytest ; extra == 'test'"] | 0.1.2 | 0 | |||||||
datasette-render-image-tags | Turn any URLs ending in .jpg/.png/.gif into img tags with width 200 | ["Framework :: Datasette", "License :: OSI Approved :: Apache Software License"] | # datasette-render-image-tags [](https://pypi.org/project/datasette-render-image-tags/) [](https://github.com/simonw/datasette-render-image-tags/releases) [](https://github.com/simonw/datasette-render-image-tags/actions?query=workflow%3ATest) [](https://github.com/simonw/datasette-render-image-tags/blob/main/LICENSE) Turn any URLs ending in .jpg/.png/.gif into img tags with width 200 ## Installation Install this plugin in the same environment as Datasette. datasette install datasette-render-image-tags ## Usage Once installed, any cells containing a URL that ends with `.png` or `.jpg` or `.jpeg` or `.gif` will be rendered using an image tag, with a width of 200px. ## Development To set up this plugin locally, first checkout the code. Then create a new virtual environment: cd datasette-render-image-tags python3 -m venv venv source venv/bin/activate Now install the dependencies and test dependencies: pip install -e '.[test]' To run the tests: pytest | Simon Willison | text/markdown | https://github.com/simonw/datasette-render-image-tags | Apache License, Version 2.0 | https://pypi.org/project/datasette-render-image-tags/ | https://pypi.org/project/datasette-render-image-tags/ | {"CI": "https://github.com/simonw/datasette-render-image-tags/actions", "Changelog": "https://github.com/simonw/datasette-render-image-tags/releases", "Homepage": "https://github.com/simonw/datasette-render-image-tags", "Issues": "https://github.com/simonw/datasette-render-image-tags/issues"} | https://pypi.org/project/datasette-render-image-tags/0.1/ | ["datasette", "pytest ; extra == 'test'", "pytest-asyncio ; extra == 'test'"] | >=3.7 | 0.1 | 0 | ||||||
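Based on that description, a cell value of `https://example.com/photo.jpg` would presumably be rendered roughly as follows (the exact markup is an assumption, not taken from the plugin source):

```html
<img src="https://example.com/photo.jpg" width="200">
```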
datasette-render-images | Datasette plugin that renders binary blob images using data-uris | [] | # datasette-render-images [](https://pypi.org/project/datasette-render-images/) [](https://github.com/simonw/datasette-render-images/releases) [](https://github.com/simonw/datasette-render-images/actions?query=workflow%3ATest) [](https://github.com/simonw/datasette-render-images/blob/main/LICENSE) A Datasette plugin that renders binary blob images with data-uris, using the [render_cell() plugin hook](https://datasette.readthedocs.io/en/stable/plugins.html#render-cell-value-column-table-database-datasette). ## Installation Install this plugin in the same environment as Datasette. $ pip install datasette-render-images ## Usage If a database row contains binary image data (PNG, GIF or JPEG), this plugin will detect that it is an image (using the [imghdr module](https://docs.python.org/3/library/imghdr.html)) and render that cell using an `<img src="data:image/png;base64,...">` element. Here's a [demo of the plugin in action](https://datasette-render-images-demo.datasette.io/favicons/favicons). ## Creating a compatible database table You can use the [sqlite-utils insert-files](https://sqlite-utils.readthedocs.io/en/stable/cli.html#inserting-binary-data-from-files) command to insert image files into a database table: $ pip install sqlite-utils $ sqlite-utils insert-files gifs.db images *.gif See [Fun with binary data and SQLite](https://simonwillison.net/2020/Jul/30/fun-binary-data-and-sqlite/) for more on this tool. ## Configuration By default the plugin will only render images that are smaller than 100KB. You can adjust this limit using the `size_limit` plugin configuration option - for example, to increase the limit to 1MB (1000000 bytes) use the following … | Simon Willison | text/markdown | https://github.com/simonw/datasette-render-images | Apache License, Version 2.0 | https://pypi.org/project/datasette-render-images/ | https://pypi.org/project/datasette-render-images/ | {"Homepage": "https://github.com/simonw/datasette-render-images"} | https://pypi.org/project/datasette-render-images/0.3.2/ | ["datasette", "pytest ; extra == 'test'"] | 0.3.2 | 0 | |||||||
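The datasette-render-images cell is truncated just before its configuration example, but the option name and value it describes imply a `metadata.json` block along these lines (a sketch, not copied from the full README):

```json
{
  "plugins": {
    "datasette-render-images": {
      "size_limit": 1000000
    }
  }
}
```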
datasette-render-markdown | Datasette plugin for rendering Markdown | [] | # datasette-render-markdown [](https://pypi.org/project/datasette-render-markdown/) [](https://github.com/simonw/datasette-render-markdown/releases) [](https://github.com/simonw/datasette-render-markdown/actions?query=workflow%3ATest) [](https://github.com/simonw/datasette-render-markdown/blob/main/LICENSE) Datasette plugin for rendering Markdown. ## Installation Install this plugin in the same environment as Datasette to enable this new functionality: $ pip install datasette-render-markdown ## Usage You can explicitly list the columns you would like to treat as Markdown using [plugin configuration](https://datasette.readthedocs.io/en/stable/plugins.html#plugin-configuration) in a `metadata.json` file. Add a `"datasette-render-markdown"` configuration block and use a `"columns"` key to list the columns you would like to treat as Markdown values: ```json { "plugins": { "datasette-render-markdown": { "columns": ["body"] } } } ``` This will cause any `body` column in any table to be treated as markdown and safely rendered using [Python-Markdown](https://python-markdown.github.io/). The resulting HTML is then run through [Bleach](https://bleach.readthedocs.io/) to avoid the risk of XSS security problems. Save this to `metadata.json` and run Datasette with the `--metadata` flag to load this configuration: $ datasette serve mydata.db --metadata metadata.json The configuration block can be used at the top level, or it can be applied just to specific databases or tables. Here's how to apply it to just the `entries` table in the `news.db` database: ```json { "databases": { "news": { "tables": { … | Simon Willison | text/markdown | https://github.com/simonw/datasette-render-markdown | Apache License, Version 2.0 | https://pypi.org/project/datasette-render-markdown/ | https://pypi.org/project/datasette-render-markdown/ | {"Homepage": "https://github.com/simonw/datasette-render-markdown"} | https://pypi.org/project/datasette-render-markdown/2.1/ | ["datasette", "markdown", "bleach", "pytest ; extra == 'test'", "pytest-asyncio ; extra == 'test'"] | 2.1 | 0 | |||||||
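The datasette-render-markdown cell above is truncated mid-example, but the per-table nesting it begins matches the pattern shown for datasette-render-html elsewhere in this table, so the completed configuration would plausibly read:

```json
{
  "databases": {
    "news": {
      "tables": {
        "entries": {
          "plugins": {
            "datasette-render-markdown": {
              "columns": ["body"]
            }
          }
        }
      }
    }
  }
}
```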
datasette-render-timestamps | Datasette plugin for rendering timestamps | [] | # datasette-render-timestamps [](https://pypi.org/project/datasette-render-timestamps/) [](https://circleci.com/gh/simonw/datasette-render-timestamps) [](https://github.com/simonw/datasette-render-timestamps/blob/master/LICENSE) Datasette plugin for rendering timestamps. ## Installation Install this plugin in the same environment as Datasette to enable this new functionality: pip install datasette-render-timestamps The plugin will then look out for integer numbers that are likely to be timestamps - anything that would be a number of seconds from 5 years ago to 5 years in the future. These will then be rendered in a more readable format. ## Configuration You can disable automatic column detection in favour of explicitly listing the columns that you would like to render using [plugin configuration](https://datasette.readthedocs.io/en/stable/plugins.html#plugin-configuration) in a `metadata.json` file. Add a `"datasette-render-timestamps"` configuration block and use a `"columns"` key to list the columns you would like to treat as timestamp values: ```json { "plugins": { "datasette-render-timestamps": { "columns": ["created", "updated"] } } } ``` This will cause any `created` or `updated` columns in any table to be treated as timestamps and rendered. Save this to `metadata.json` and run datasette with the `--metadata` flag to load this configuration: datasette serve mydata.db --metadata metadata.json To disable automatic timestamp detection entirely, you can use `"columns": []`. This configuration block can be used at the top level, or it can be applied just to specific databases or tables. Here's how to apply it to just the `entries` table in the `news.db` database: ```json { "databases": { "news": { "tables": … | Simon Willison | text/markdown | https://github.com/simonw/datasette-render-timestamps | Apache License, Version 2.0 | https://pypi.org/project/datasette-render-timestamps/ | https://pypi.org/project/datasette-render-timestamps/ | {"Homepage": "https://github.com/simonw/datasette-render-timestamps"} | https://pypi.org/project/datasette-render-timestamps/1.0.1/ | ["datasette", "pytest ; extra == 'test'"] | 1.0.1 | 0 | |||||||
datasette-ripgrep | Web interface for searching your code using ripgrep, built as a Datasette plugin | [] | # datasette-ripgrep [](https://pypi.org/project/datasette-ripgrep/) [](https://github.com/simonw/datasette-ripgrep/releases) [](https://github.com/simonw/datasette-ripgrep/actions?query=workflow%3ATest) [](https://github.com/simonw/datasette-ripgrep/blob/main/LICENSE) Web interface for searching your code using [ripgrep](https://github.com/BurntSushi/ripgrep), built as a [Datasette](https://datasette.io/) plugin ## Demo Try this plugin out at https://ripgrep.datasette.io/-/ripgrep - where you can run regular expression searches across the source code of Datasette and all of the `datasette-*` plugins belonging to the [simonw GitHub user](https://github.com/simonw). Some example searches: - [with.\*AsyncClient](https://ripgrep.datasette.io/-/ripgrep?pattern=with.*AsyncClient) - regular expression search for `with.*AsyncClient` - [.plugin_config, literal=on](https://ripgrep.datasette.io/-/ripgrep?pattern=.plugin_config\(&literal=on) - a non-regular expression search for `.plugin_config(` - [with.\*AsyncClient glob=datasette/\*\*](https://ripgrep.datasette.io/-/ripgrep?pattern=with.*AsyncClient&glob=datasette%2F%2A%2A) - search for that pattern only within the `datasette/` top folder - ["sqlite-utils\[">\] glob=setup.py](https://ripgrep.datasette.io/-/ripgrep?pattern=%22sqlite-utils%5B%22%3E%5D&glob=setup.py) - a regular expression search for packages that depend on either `sqlite-utils` or `sqlite-utils>=some-version` - [test glob=!\*.html](https://ripgrep.datasette.io/-/ripgrep?pattern=test&glob=%21*.html) - search for the string `test` but exclude results in HTML files ## Installation Install this plugin in the same environment as Datasette. $ datasette install datasette-ripgrep The `… | Simon Willison | text/markdown | https://github.com/simonw/datasette-ripgrep | Apache License, Version 2.0 | https://pypi.org/project/datasette-ripgrep/ | https://pypi.org/project/datasette-ripgrep/ | {"CI": "https://github.com/simonw/datasette-ripgrep/actions", "Changelog": "https://github.com/simonw/datasette-ripgrep/releases", "Homepage": "https://github.com/simonw/datasette-ripgrep", "Issues": "https://github.com/simonw/datasette-ripgrep/issues"} | https://pypi.org/project/datasette-ripgrep/0.7/ | ["datasette", "pytest ; extra == 'test'", "pytest-asyncio ; extra == 'test'", "httpx ; extra == 'test'"] | >=3.6 | 0.7 | 0 | ||||||
datasette-rure | Datasette plugin that adds a custom SQL function for executing matches using the Rust regular expression engine | [] | # datasette-rure [](https://pypi.org/project/datasette-rure/) [](https://circleci.com/gh/simonw/datasette-rure) [](https://github.com/simonw/datasette-rure/blob/master/LICENSE) Datasette plugin that adds a custom SQL function for executing matches using the Rust regular expression engine Install this plugin in the same environment as Datasette to enable the `regexp()` SQL function. $ pip install datasette-rure The plugin is built on top of the [rure-python](https://github.com/davidblewett/rure-python) library by David Blewett. ## regexp() to test regular expressions You can test if a value matches a regular expression like this: select regexp('hi.*there', 'hi there') -- returns 1 select regexp('not.*there', 'hi there') -- returns 0 You can also use SQLite's custom syntax to run matches: select 'hi there' REGEXP 'hi.*there' -- returns 1 This means you can select rows based on regular expression matches - for example, to select every article where the title begins with an E or an F: select * from articles where title REGEXP '^[EF]' Try this out: [REGEXP interactive demo](https://datasette-rure-demo.datasette.io/24ways?sql=select+*+from+articles+where+title+REGEXP+%27%5E%5BEF%5D%27) ## regexp_match() to extract groups You can extract captured subsets of a pattern using `regexp_match()`. select regexp_match('.*( and .*)', title) as n from articles where n is not null -- Returns the ' and X' component of any matching titles, e.g. -- and Recognition -- and Transitions Their Place -- etc This will return the first parenthesis match when called with two arguments. You can call it with three arguments to indicate which match you would like to extract: select regexp_match('.*(and)(.*)', title, 2) as n from articles where n is not null The functio… | Simon Willison | text/markdown | https://github.com/simonw/datasette-rure | Apache License, Version 2.0 | https://pypi.org/project/datasette-rure/ | https://pypi.org/project/datasette-rure/ | {"Homepage": "https://github.com/simonw/datasette-rure"} | https://pypi.org/project/datasette-rure/0.3/ | ["datasette", "rure", "pytest ; extra == 'test'"] | 0.3 | 0 | |||||||
datasette-sandstorm-support | Authentication and permissions for Datasette on Sandstorm | ["Framework :: Datasette", "License :: OSI Approved :: Apache Software License"] | # datasette-sandstorm-support [](https://pypi.org/project/datasette-sandstorm-support/) [](https://github.com/simonw/datasette-sandstorm-support/releases) [](https://github.com/simonw/datasette-sandstorm-support/actions?query=workflow%3ATest) [](https://github.com/simonw/datasette-sandstorm-support/blob/main/LICENSE) Authentication and permissions for Datasette on Sandstorm ## Installation Install this plugin in the same environment as Datasette. datasette install datasette-sandstorm-support ## Usage This plugin is part of [datasette-sandstorm](https://github.com/ocdtrekkie/datasette-sandstorm). ## Development To set up this plugin locally, first checkout the code. Then create a new virtual environment: cd datasette-sandstorm-support python3 -m venv venv source venv/bin/activate Now install the dependencies and test dependencies: pip install -e '.[test]' To run the tests: pytest | Simon Willison | text/markdown | https://github.com/simonw/datasette-sandstorm-support | Apache License, Version 2.0 | https://pypi.org/project/datasette-sandstorm-support/ | https://pypi.org/project/datasette-sandstorm-support/ | {"CI": "https://github.com/simonw/datasette-sandstorm-support/actions", "Changelog": "https://github.com/simonw/datasette-sandstorm-support/releases", "Homepage": "https://github.com/simonw/datasette-sandstorm-support", "Issues": "https://github.com/simonw/datasette-sandstorm-support/issues"} | https://pypi.org/project/datasette-sandstorm-support/0.2/ | ["datasette", "pytest ; extra == 'test'", "pytest-asyncio ; extra == 'test'"] | >=3.7 | 0.2 | 0 | ||||||
datasette-saved-queries | Datasette plugin that lets users save and execute queries | [] | # datasette-saved-queries [](https://pypi.org/project/datasette-saved-queries/) [](https://github.com/simonw/datasette-saved-queries/releases) [](https://github.com/simonw/datasette-saved-queries/blob/master/LICENSE) Datasette plugin that lets users save and execute queries ## Installation Install this plugin in the same environment as Datasette. $ pip install datasette-saved-queries ## Usage When the plugin is installed Datasette will automatically create a `saved_queries` table in the first connected database when it starts up. It also creates a `save_query` writable canned query which you can use to save new queries. Queries that you save will be added to the query list on the database page. ## Development To set up this plugin locally, first checkout the code. Then create a new virtual environment: cd datasette-saved-queries python -mvenv venv source venv/bin/activate Or if you are using `pipenv`: pipenv shell Now install the dependencies and tests: pip install -e '.[test]' To run the tests: pytest | Simon Willison | text/markdown | https://github.com/simonw/datasette-saved-queries | Apache License, Version 2.0 | https://pypi.org/project/datasette-saved-queries/ | https://pypi.org/project/datasette-saved-queries/ | {"CI": "https://github.com/simonw/datasette-saved-queries/actions", "Changelog": "https://github.com/simonw/datasette-saved-queries/releases", "Homepage": "https://github.com/simonw/datasette-saved-queries", "Issues": "https://github.com/simonw/datasette-saved-queries/issues"} | https://pypi.org/project/datasette-saved-queries/0.2.1/ | ["datasette (>=0.45)", "sqlite-utils", "pytest ; extra == 'test'", "pytest-asyncio ; extra == 'test'", "httpx ; extra == 'test'"] | 0.2.1 | 0 | |||||||
datasette-scale-to-zero | Quit Datasette if it has not received traffic for a specified time period | ["Framework :: Datasette", "License :: OSI Approved :: Apache Software License"] | # datasette-scale-to-zero [](https://pypi.org/project/datasette-scale-to-zero/) [](https://github.com/simonw/datasette-scale-to-zero/releases) [](https://github.com/simonw/datasette-scale-to-zero/actions?query=workflow%3ATest) [](https://github.com/simonw/datasette-scale-to-zero/blob/main/LICENSE) Quit Datasette if it has not received traffic for a specified time period Some hosting providers such as [Fly](https://fly.io/) offer a scale to zero mechanism, where servers can shut down and will be automatically started when new traffic arrives. This plugin can be used to configure Datasette to quit X minutes (or seconds, or hours) after the last request it received. It can also cause the Datasette server to exit after a configured maximum time whether or not it is receiving traffic. ## Installation Install this plugin in the same environment as Datasette. datasette install datasette-scale-to-zero ## Configuration This plugin will only take effect if it has been configured. Add the following to your ``metadata.json`` or ``metadata.yml`` configuration file: ```json { "plugins": { "datasette-scale-to-zero": { "duration": "10m" } } } ``` This will cause Datasette to quit if it has not received any HTTP traffic for 10 minutes. You can set this value using a suffix of `m` for minutes, `h` for hours or `s` for seconds. To cause Datasette to exit if the server has been running for longer than a specific time, use `"max-age"`: ```json { "plugins": { "datasette-scale-to-zero": { "max-age": "10h" } } } ``` This example will exit the Datasette server if it has been running for more than ten hours. You can… | Simon Willison | text/markdown | https://github.com/simonw/datasette-scale-to-zero | Apache License, Version 2.0 | https://pypi.org/project/datasette-scale-to-zero/ | https://pypi.org/project/datasette-scale-to-zero/ | {"CI": "https://github.com/simonw/datasette-scale-to-zero/actions", "Changelog": "https://github.com/simonw/datasette-scale-to-zero/releases", "Homepage": "https://github.com/simonw/datasette-scale-to-zero", "Issues": "https://github.com/simonw/datasette-scale-to-zero/issues"} | https://pypi.org/project/datasette-scale-to-zero/0.2/ | ["datasette", "pytest ; extra == 'test'", "pytest-asyncio ; extra == 'test'"] | >=3.7 | 0.2 | 0 | ||||||
datasette-schema-versions | Datasette plugin that shows the schema version of every attached database | [] | # datasette-schema-versions [](https://pypi.org/project/datasette-schema-versions/) [](https://github.com/simonw/datasette-schema-versions/releases) [](https://github.com/simonw/datasette-schema-versions/actions?query=workflow%3ATest) [](https://github.com/simonw/datasette-schema-versions/blob/main/LICENSE) Datasette plugin that shows the schema version of every attached database ## Installation Install this plugin in the same environment as Datasette. $ datasette install datasette-schema-versions ## Usage Visit `/-/schema-versions` on your Datasette instance to see a numeric version for the schema for each of your databases. Any changes you make to the schema will increase this version number. | Simon Willison | text/markdown | https://github.com/simonw/datasette-schema-versions | Apache License, Version 2.0 | https://pypi.org/project/datasette-schema-versions/ | https://pypi.org/project/datasette-schema-versions/ | {"CI": "https://github.com/simonw/datasette-schema-versions/actions", "Changelog": "https://github.com/simonw/datasette-schema-versions/releases", "Homepage": "https://github.com/simonw/datasette-schema-versions", "Issues": "https://github.com/simonw/datasette-schema-versions/issues"} | https://pypi.org/project/datasette-schema-versions/0.2/ | ["datasette", "pytest ; extra == 'test'", "pytest-asyncio ; extra == 'test'", "httpx ; extra == 'test'", "sqlite-utils ; extra == 'test'"] | 0.2 | 0 | |||||||
datasette-seaborn | Statistical visualizations for Datasette using Seaborn | [] | # datasette-seaborn [](https://pypi.org/project/datasette-seaborn/) [](https://github.com/simonw/datasette-seaborn/releases) [](https://github.com/simonw/datasette-seaborn/actions?query=workflow%3ATest) [](https://github.com/simonw/datasette-seaborn/blob/main/LICENSE) Statistical visualizations for Datasette using Seaborn ## Installation Install this plugin in the same environment as Datasette. $ datasette install datasette-seaborn ## Usage Navigate to the new `.seaborn` extension for any Datasette table. The `_seaborn` argument specifies a method on `sns` to execute, e.g. `?_seaborn=relplot`. Extra arguments to those methods can be specified using e.g. `&_seaborn_x=column_name`. ## Configuration The plugin implements a default rendering time limit of five seconds. You can customize this limit using the `render_time_limit` setting, which accepts a floating point number of seconds. Add this to your `metadata.json`: ```json { "plugins": { "datasette-seaborn": { "render_time_limit": 1.0 } } } ``` ## Development To set up this plugin locally, first checkout the code. Then create a new virtual environment: cd datasette-seaborn python3 -mvenv venv source venv/bin/activate Or if you are using `pipenv`: pipenv shell Now install the dependencies and tests: pip install -e '.[test]' To run the tests: pytest | Simon Willison | text/markdown | https://github.com/simonw/datasette-seaborn | Apache License, Version 2.0 | https://pypi.org/project/datasette-seaborn/ | https://pypi.org/project/datasette-seaborn/ | {"CI": "https://github.com/simonw/datasette-seaborn/actions", "Changelog": "https://github.com/simonw/datasette-seaborn/releases", "Homepage": "https://github.com/simonw/datasette-seaborn", "Issues": "https://github.com/simonw/datasette-seaborn/issues"} | https://pypi.org/project/datasette-seaborn/0.2a0/ | ["datasette (>=0.50)", "seaborn (>=0.11.0)", "pytest ; extra == 'test'", "pytest-asyncio ; extra == 'test'"] | 0.2a0 | 0 | |||||||
datasette-search-all | Datasette plugin for searching all searchable tables at once | [] | # datasette-search-all [](https://pypi.org/project/datasette-search-all/) [](https://github.com/simonw/datasette-search-all/releases) [](https://github.com/simonw/datasette-search-all/actions?query=workflow%3ATest) [](https://github.com/simonw/datasette-search-all/blob/main/LICENSE) Datasette plugin for searching all searchable tables at once. ## Installation Install the plugin in the same Python environment as Datasette: pip install datasette-search-all ## Background See [datasette-search-all: a new plugin for searching multiple Datasette tables at once](https://simonwillison.net/2020/Mar/9/datasette-search-all/) for background on this project. You can try the plugin out at https://fara.datasettes.com/ ## Usage This plugin only works if at least one of the tables connected to your Datasette instance has been configured for SQLite's full-text search. The [Datasette search documentation](https://docs.datasette.io/en/stable/full_text_search.html) includes details on how to enable full-text search for a table. You can also use the following tools: * [sqlite-utils](https://sqlite-utils.datasette.io/en/stable/cli.html#configuring-full-text-search) includes a command-line tool for enabling full-text search. * [datasette-enable-fts](https://github.com/simonw/datasette-enable-fts) is a Datasette plugin that adds a web interface for enabling search for specific columns. If the plugin detects at least one searchable table it will add a search form to the homepage. You can also navigate to `/-/search` on your Datasette instance to use the search interface directly. ## Screenshot … | Simon Willison | text/markdown | https://github.com/simonw/datasette-search-all | Apache License, Version 2.0 | https://pypi.org/project/datasette-search-all/ | https://pypi.org/project/datasette-search-all/ | | https://pypi.org/project/datasette-search-all/1.1/ | […, "pytest ; extra == 'test'", "pytest-asyncio ; extra == 'test'", "sqlite-utils ; extra == 'test'"] | >=3.7 | 1.1 | 0 | ||||||
datasette-sentry | Datasette plugin for configuring Sentry | ["License :: OSI Approved :: Apache Software License", "Programming Language :: Python :: 3.7", "Programming Language :: Python :: 3.8"] | # datasette-sentry [](https://pypi.org/project/datasette-sentry/) [](https://github.com/simonw/datasette-sentry/releases) [](https://github.com/simonw/datasette-sentry/actions?query=workflow%3ATest) [](https://github.com/simonw/datasette-sentry/blob/main/LICENSE) Datasette plugin for configuring Sentry for error reporting ## Installation pip install datasette-sentry ## Usage This plugin only takes effect if your `metadata.json` file contains relevant top-level plugin configuration in a `"datasette-sentry"` configuration key. You will need a Sentry DSN - see their [Getting Started instructions](https://docs.sentry.io/error-reporting/quickstart/?platform=python). Add it to `metadata.json` like this: ```json { "plugins": { "datasette-sentry": { "dsn": "https://KEY@sentry.io/PROJECTID" } } } ``` Settings in `metadata.json` are visible to anyone who visits the `/-/metadata` URL so this is a good place to take advantage of Datasette's [secret configuration values](https://datasette.readthedocs.io/en/stable/plugins.html#secret-configuration-values), in which case your configuration will look more like this: ```json { "plugins": { "datasette-sentry": { "dsn": { "$env": "SENTRY_DSN" } } } } ``` Then make a `SENTRY_DSN` environment variable available to Datasette. ## Configuration In addition to the `dsn` setting, you can also configure the Sentry [sample rate](https://docs.sentry.io/platforms/python/configuration/sampling/) by setting `sample_rate` to a floating point number between 0 and 1. For example, to capture 25% of errors you would do this: ```json { "plugins": { "datasette… | Simon Willison | text/markdown | https://github.com/simonw/datasette-sentry | Apache License, Version 2.0 | https://pypi.org/project/datasette-sentry/ | https://pypi.org/project/datasette-sentry/ | {"Homepage": "https://github.com/simonw/datasette-sentry"} | https://pypi.org/project/datasette-sentry/0.3/ | ["sentry-sdk", "datasette (>=0.62)", "pytest ; extra == 'test'", "pytest-asyncio ; extra == 'test'"] | 0.3 | 0 | |||||||
datasette-show-errors | Datasette plugin for displaying error tracebacks | [] | # datasette-show-errors [](https://pypi.org/project/datasette-show-errors/) [](https://circleci.com/gh/simonw/datasette-show-errors) [](https://github.com/simonw/datasette-show-errors/blob/master/LICENSE) Datasette plugin for displaying error tracebacks. ## Installation pip install datasette-show-errors ## Usage Installing the plugin will cause any internal error to be displayed with a full traceback, rather than just a generic 500 page. Be careful not to use this in a context that might expose sensitive information. | Simon Willison | text/markdown | https://github.com/simonw/datasette-show-errors | Apache License, Version 2.0 | https://pypi.org/project/datasette-show-errors/ | https://pypi.org/project/datasette-show-errors/ | {"Homepage": "https://github.com/simonw/datasette-show-errors"} | https://pypi.org/project/datasette-show-errors/0.2/ | ["starlette", "datasette", "pytest ; extra == 'test'"] | 0.2 | 0 | |||||||
datasette-sitemap | Generate sitemap.xml for Datasette sites | ["Framework :: Datasette", "License :: OSI Approved :: Apache Software License"] | # datasette-sitemap [](https://pypi.org/project/datasette-sitemap/) [](https://github.com/simonw/datasette-sitemap/releases) [](https://github.com/simonw/datasette-sitemap/actions?query=workflow%3ATest) [](https://github.com/simonw/datasette-sitemap/blob/main/LICENSE) Generate sitemap.xml for Datasette sites ## Installation Install this plugin in the same environment as Datasette. datasette install datasette-sitemap ## Demo This plugin is used for the sitemap on [til.simonwillison.net](https://til.simonwillison.net/): - https://til.simonwillison.net/sitemap.xml Here's [the configuration](https://github.com/simonw/til/blob/d4f67743a90a67100b46145986b2dec6f8d96583/metadata.yaml#L14-L16) used for that sitemap. ## Usage Once configured, this plugin adds a sitemap at `/sitemap.xml` with a list of URLs. This list is defined using a SQL query in `metadata.json` (or `.yml`) that looks like this: ```json { "plugins": { "datasette-sitemap": { "query": "select '/' || id as path from my_table" } } } ``` Using `metadata.yml` allows for multi-line SQL queries which can be easier to maintain: ```yaml plugins: datasette-sitemap: query: | select '/' || id as path from my_table ``` The SQL query must return a column called `path`. The values in this column must begin with a `/`. They will be used to generate a sitemap that looks like this: ```xml <?xml version="1.0" encoding="UTF-8"?> <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"> <url><loc>https://example.com/1</loc></url> <url><loc>https://example.com/2</loc></url> </urlset> ``` You can use ``UNION`` in your SQL query to combine results from multiple tables, or include liter… | Simon Willison | text/markdown | https://github.com/simonw/datasette-sitemap | Apache License, Version 2.0 | https://pypi.org/project/datasette-sitemap/ | https://pypi.org/project/datasette-sitemap/ | {"CI": "https://github.com/simonw/datasette-sitemap/actions", "Changelog": "https://github.com/simonw/datasette-sitemap/releases", "Homepage": "https://github.com/simonw/datasette-sitemap", "Issues": "https://github.com/simonw/datasette-sitemap/issues"} | https://pypi.org/project/datasette-sitemap/1.0/ | ["datasette", "pytest ; extra == 'test'", "pytest-asyncio ; extra == 'test'", "datasette-block-robots ; extra == 'test'"] | >=3.7 | 1.0 | 0 | ||||||
datasette-socrata | Import data from Socrata into Datasette | ["Framework :: Datasette", "License :: OSI Approved :: Apache Software License"] | # datasette-socrata [](https://pypi.org/project/datasette-socrata/) [](https://github.com/simonw/datasette-socrata/releases) [](https://github.com/simonw/datasette-socrata/actions?query=workflow%3ATest) [](https://github.com/simonw/datasette-socrata/blob/main/LICENSE) Import data from Socrata into Datasette ## Installation Install this plugin in the same environment as Datasette. datasette install datasette-socrata ## Usage Make sure you have [enabled WAL mode](https://til.simonwillison.net/sqlite/enabling-wal-mode) on your database files before using this plugin. Once installed, an interface for importing data from Socrata will become available at this URL: /-/import-socrata Users will be able to paste in a URL to a dataset on Socrata in order to initialize an import. You can also pre-fill the form by passing a `?url=` parameter, for example: /-/import-socrata?url=https://data.sfgov.org/City-Infrastructure/Street-Tree-List/tkzw-k3nq Any database that is attached to Datasette, is NOT loaded as immutable (with the `-i` option), and has WAL mode enabled will be available for users to import data into. The `import-socrata` permission governs access. By default the `root` actor (accessible using `datasette --root` to start Datasette) is granted that permission. You can use permission plugins such as [datasette-permissions-sql](https://github.com/simonw/datasette-permissions-sql) to grant additional access to other users. ## Configuration If you only want Socrata imports to be allowed to a specific database, you can configure that using plugin configuration in `metadata.yml`: ```yaml plugins: datasette-socrata: database: socrata ``` ## Development To set up this… | Simon Willison | text/markdown | https://github.com/simonw/datasette-socrata | Apache License, Version 2.0 | https://pypi.org/project/datasette-socrata/ | https://pypi.org/project/datasette-socrata/ | {"CI": "https://github.com/simonw/datasette-socrata/actions", "Changelog": "https://github.com/simonw/datasette-socrata/releases", "Homepage": "https://github.com/simonw/datasette-socrata", "Issues": "https://github.com/simonw/datasette-socrata/issues"} | https://pypi.org/project/datasette-socrata/0.3/ | ["datasette", "sqlite-utils (>=3.27)", "datasette-low-disk-space-hook", "pytest ; extra == 'test'", "pytest-asyncio ; extra == 'test'", "pytest-httpx ; extra == 'test'"] | >=3.7 | 0.3 | 0 | ||||||
datasette-sqlite-fts4 | Datasette plugin exposing SQL functions from sqlite-fts4 | ["Framework :: Datasette", "License :: OSI Approved :: Apache Software License"] | # datasette-sqlite-fts4 [](https://pypi.org/project/datasette-sqlite-fts4/) [](https://github.com/simonw/datasette-sqlite-fts4/releases) [](https://github.com/simonw/datasette-sqlite-fts4/actions?query=workflow%3ATest) [](https://github.com/simonw/datasette-sqlite-fts4/blob/main/LICENSE) Datasette plugin that exposes the custom SQL functions from [sqlite-fts4](https://github.com/simonw/sqlite-fts4). [Interactive demo](https://datasette-sqlite-fts4.datasette.io/24ways-fts4?sql=select%0D%0A++++json_object%28%0D%0A++++++++"label"%2C+articles.title%2C+"href"%2C+articles.url%0D%0A++++%29+as+article%2C%0D%0A++++articles.author%2C%0D%0A++++rank_score%28matchinfo%28articles_fts%2C+"pcx"%29%29+as+score%2C%0D%0A++++rank_bm25%28matchinfo%28articles_fts%2C+"pcnalx"%29%29+as+bm25%2C%0D%0A++++json_object%28%0D%0A++++++++"pre"%2C+annotate_matchinfo%28matchinfo%28articles_fts%2C+"pcxnalyb"%29%2C+"pcxnalyb"%29%0D%0A++++%29+as+annotated_matchinfo%2C%0D%0A++++matchinfo%28articles_fts%2C+"pcxnalyb"%29+as+matchinfo%2C%0D%0A++++decode_matchinfo%28matchinfo%28articles_fts%2C+"pcxnalyb"%29%29+as+decoded_matchinfo%0D%0Afrom%0D%0A++++articles_fts+join+articles+on+articles.rowid+%3D+articles_fts.rowid%0D%0Awhere%0D%0A++++articles_fts+match+%3Asearch%0D%0Aorder+by+bm25&search=jquery+maps). Read [Exploring search relevance algorithms with SQLite](https://simonwillison.net/2019/Jan/7/exploring-search-relevance-algorithms-sqlite/) for further details on this project. ## Installation pip install datasette-sqlite-fts4 If you are deploying a database using `datasette publish` you can include this plugin using the `--install` option: datasette publish now mydb.db --install=datasette-sqlite-fts4 | Simon Willison | text/markdown | https://github.com/simonw/datasette-sqlite-fts4 | Apache License, Version 2.0 | https://pypi.org/project/datasette-sqlite-fts4/ | https://pypi.org/project/datasette-sqlite-fts4/ | {"CI": "https://github.com/simonw/datasette-sqlite-fts4/actions", "Changelog": "https://github.com/simonw/datasette-sqlite-fts4/releases", "Homepage": "https://github.com/simonw/datasette-sqlite-fts4", "Issues": "https://github.com/simonw/datasette-sqlite-fts4/issues"} | https://pypi.org/project/datasette-sqlite-fts4/0.3.2/ | ["datasette", "sqlite-fts4 (>=1.0.3)", "pytest ; extra == 'test'", "pytest-asyncio ; extra == 'test'"] | >=3.7 | 0.3.2 | 0 | ||||||
datasette-template-request | Expose the Datasette request object to custom templates | [] | # datasette-template-request [](https://pypi.org/project/datasette-template-request/) [](https://github.com/simonw/datasette-template-request/releases) [](https://github.com/simonw/datasette-template-request/actions?query=workflow%3ATest) [](https://github.com/simonw/datasette-template-request/blob/main/LICENSE) Expose the Datasette request object to custom templates ## Installation Install this plugin in the same environment as Datasette. $ datasette install datasette-template-request ## Usage Once this plugin is installed, Datasette [custom templates](https://docs.datasette.io/en/stable/custom_templates.html) can use `{{ request }}` to access the current [request object](https://docs.datasette.io/en/stable/internals.html#request-object). For example, to access `?name=Cleo` in the query string a template could use this: Name: {{ request.args.name }} ## Development To set up this plugin locally, first checkout the code. Then create a new virtual environment: cd datasette-template-request python3 -mvenv venv source venv/bin/activate Or if you are using `pipenv`: pipenv shell Now install the dependencies and test dependencies: pip install -e '.[test]' To run the tests: pytest | Simon Willison | text/markdown | https://github.com/simonw/datasette-template-request | Apache License, Version 2.0 | https://pypi.org/project/datasette-template-request/ | https://pypi.org/project/datasette-template-request/ | {"CI": "https://github.com/simonw/datasette-template-request/actions", "Changelog": "https://github.com/simonw/datasette-template-request/releases", "Homepage": "https://github.com/simonw/datasette-template-request", "Issues": "https://github.com/simonw/datasette-template-request/issues"} | https://pypi.org/project/datasette-template-request/0.1/ | ["datasette", "pytest ; extra == 'test'", "pytest-asyncio ; extra == 'test'"] | >=3.6 | 0.1 | 0 | ||||||
datasette-template-sql | Datasette plugin for executing SQL queries from templates | [] | # datasette-template-sql [](https://pypi.org/project/datasette-template-sql/) [](https://github.com/simonw/datasette-template-sql/releases) [](https://github.com/simonw/datasette-template-sql/actions?query=workflow%3ATest) [](https://github.com/simonw/datasette-template-sql/blob/main/LICENSE) Datasette plugin for executing SQL queries from templates. ## Examples [www.niche-museums.com](https://www.niche-museums.com/) uses this plugin to run a custom themed website on top of Datasette. The full source code for the site [is here](https://github.com/simonw/museums) - see also [niche-museums.com, powered by Datasette](https://simonwillison.net/2019/Nov/25/niche-museums/). [simonw/til](https://github.com/simonw/til) is another simple example, described in [Using a self-rewriting README powered by GitHub Actions to track TILs](https://simonwillison.net/2020/Apr/20/self-rewriting-readme/). ## Installation Run this command to install the plugin in the same environment as Datasette: $ pip install datasette-template-sql ## Usage This plugin makes a new function, `sql(sql_query)`, available to your Datasette templates. You can use it like this: ```html+jinja {% for row in sql("select 1 + 1 as two, 2 * 4 as eight") %} {% for key in row.keys() %} {{ key }}: {{ row[key] }}<br> {% endfor %} {% endfor %} ``` The plugin will execute SQL against the current database for the page in `database.html`, `table.html` and `row.html` templates. If a template does not have a current database (`index.html` for example) the query will execute against the first attached database. ### Queries with arguments You can construct a SQL query using `?` or `:name` parameter syntax by … | Simon Willison | text/markdown | https://github.com/simonw/datasette-template-sql | Apache License, Version 2.0 | https://pypi.org/project/datasette-template-sql/ | https://pypi.org/project/datasette-template-sql/ | {"CI": "https://github.com/simonw/datasette-template-sql/actions", "Changelog": "https://github.com/simonw/datasette-template-sql/releases", "Homepage": "https://github.com/simonw/datasette-template-sql", "Issues": "https://github.com/simonw/datasette-template-sql/issues"} | https://pypi.org/project/datasette-template-sql/1.0.2/ | ["datasette (>=0.54)", "pytest ; extra == 'test'", "pytest-asyncio ; extra == 'test'", "sqlite-utils ; extra == 'test'"] | >=3.6 | 1.0.2 | 0 | ||||||
datasette-tiddlywiki | Run TiddlyWiki in Datasette and save Tiddlers to a SQLite database | ["Framework :: Datasette", "License :: OSI Approved :: Apache Software License"] | # datasette-tiddlywiki [](https://pypi.org/project/datasette-tiddlywiki/) [](https://github.com/simonw/datasette-tiddlywiki/releases) [](https://github.com/simonw/datasette-tiddlywiki/actions?query=workflow%3ATest) [](https://github.com/simonw/datasette-tiddlywiki/blob/main/LICENSE) Run [TiddlyWiki](https://tiddlywiki.com/) in Datasette and save Tiddlers to a SQLite database ## Installation Install this plugin in the same environment as Datasette. $ datasette install datasette-tiddlywiki ## Usage Start Datasette with a `tiddlywiki.db` database. You can create it if it does not yet exist using `--create`. You need to be signed in as the `root` user to write to the wiki, so use the `--root` option and click on the link it provides: % datasette tiddlywiki.db --create --root http://127.0.0.1:8001/-/auth-token?token=456670f1e8d01a8a33b71e17653130de17387336e29afcdfb4ab3d18261e6630 # ... Navigate to `/-/tiddlywiki` on your instance to interact with TiddlyWiki. ## Authentication and permissions By default, the wiki can be read by anyone who has permission to read the `tiddlywiki.db` database. Only the signed in `root` user can write to it. You can sign in using the `--root` option described above, or you can set a password for that user using the [datasette-auth-passwords](https://datasette.io/plugins/datasette-auth-passwords) plugin and sign in using the `/-/login` page. You can use the `edit-tiddlywiki` permission to grant edit permissions to other users, using another plugin such as [datasette-permissions-sql](https://datasette.io/plugins/datasette-permissions-sql). You can use the `view-database` permission against the `tiddlywiki` database to control who can vie… | Simon Willison | text/markdown | https://github.com/simonw/datasette-tiddlywiki | Apache License, Version 2.0 | https://pypi.org/project/datasette-tiddlywiki/ | https://pypi.org/project/datasette-tiddlywiki/ | {"CI": "https://github.com/simonw/datasette-tiddlywiki/actions", "Changelog": "https://github.com/simonw/datasette-tiddlywiki/releases", "Homepage": "https://github.com/simonw/datasette-tiddlywiki", "Issues": "https://github.com/simonw/datasette-tiddlywiki/issues"} | https://pypi.org/project/datasette-tiddlywiki/0.2/ | ["datasette", "pytest ; extra == 'test'", "pytest-asyncio ; extra == 'test'"] | >=3.6 | 0.2 | 0 | ||||||
datasette-tiles | Mapping tile server for Datasette, serving tiles from MBTiles packages | [] | # datasette-tiles [](https://pypi.org/project/datasette-tiles/) [](https://github.com/simonw/datasette-tiles/releases) [](https://github.com/simonw/datasette-tiles/actions?query=workflow%3ATest) [](https://github.com/simonw/datasette-tiles/blob/main/LICENSE) Datasette plugin for serving MBTiles map tiles ## Installation Install this plugin in the same environment as Datasette. $ datasette install datasette-tiles ## Demo You can try this plugin out at https://datasette-tiles-demo.datasette.io/-/tiles ## Usage This plugin scans all database files connected to Datasette to see if any of them are valid MBTiles databases. It can then serve tiles from those databases at the following URL: /-/tiles/db-name/zoom/x/y.png An example map for each database demonstrating the configured minimum and maximum zoom for that database can be found at `/-/tiles/db-name` - this can also be accessed via the table and database action menus for that database. Visit `/-/tiles` for an index page of attached valid databases. You can install the [datasette-basemap](https://datasette.io/plugins/datasette-basemap) plugin to get a `basemap` default set of tiles, handling zoom levels 0 to 6 using OpenStreetMap. ### Tile coordinate systems There are two tile coordinate systems in common use for online maps. The first is used by OpenStreetMap and Google Maps, the second is from a specification called [Tile Map Service](https://en.wikipedia.org/wiki/Tile_Map_Service), or TMS. Both systems use three components: `z/x/y` - where `z` is the zoom level, `x` is the column and `y` is the row. The difference is in the way the `y` value is counted. OpenStreetMap has y=0 at the top. TMS has y=0 at the bottom. An illustrat… | Simon Willison | text/markdown | https://github.com/simonw/datasette-tiles | Apache License, Version 2.0 | https://pypi.org/project/datasette-tiles/ | https://pypi.org/project/datasette-tiles/ | {"CI": "https://github.com/simonw/datasette-tiles/actions", "Changelog": "https://github.com/simonw/datasette-tiles/releases", "Homepage": "https://github.com/simonw/datasette-tiles", "Issues": "https://github.com/simonw/datasette-tiles/issues"} | https://pypi.org/project/datasette-tiles/0.6.1/ | ["datasette", "datasette-leaflet (>=0.2.2)", "pytest ; extra == 'test'", "pytest-asyncio ; extra == 'test'", "datasette-basemap (>=0.2) ; extra == 'test'"] | >=3.6 | 0.6.1 | 0 | ||||||
datasette-total-page-time | Add a note to the Datasette footer measuring the total page load time | ["Framework :: Datasette", "License :: OSI Approved :: Apache Software License"] | # datasette-total-page-time [](https://pypi.org/project/datasette-total-page-time/) [](https://github.com/simonw/datasette-total-page-time/releases) [](https://github.com/simonw/datasette-total-page-time/actions?query=workflow%3ATest) [](https://github.com/simonw/datasette-total-page-time/blob/main/LICENSE) Add a note to the Datasette footer measuring the total page load time ## Installation Install this plugin in the same environment as Datasette. datasette install datasette-total-page-time ## Usage Once this plugin is installed, a note will appear in the footer of every page showing how long the page took to generate. > Queries took 326.74ms · Page took 386.310ms ## How it works Measuring how long a page takes to load and then injecting that note into the page is tricky, because you need to finish generating the page before you know how long it took to load it! This plugin uses the [asgi_wrapper](https://docs.datasette.io/en/stable/plugin_hooks.html#asgi-wrapper-datasette) plugin hook to measure the time taken by Datasette and then inject the following JavaScript at the bottom of the response, after the closing `</html>` tag but with the correct measured value: ```html <script> let footer = document.querySelector("footer"); if (footer) { let ms = 37.224; let s = ` · Page took ${ms.toFixed(3)}ms`; footer.innerHTML += s; } </script> ``` This script is injected only on pages with the `text/html` content type - so it should not affect JSON or CSV returned by Datasette. ## Development To set up this plugin locally, first checkout the code. Then create a new virtual environment: cd datasette-total-page-time python3 -mvenv venv s… | Simon Willison | text/markdown | https://github.com/simonw/datasette-total-page-time | Apache License, Version 2.0 | https://pypi.org/project/datasette-total-page-time/ | https://pypi.org/project/datasette-total-page-time/ | {"CI": "https://github.com/simonw/datasette-total-page-time/actions", "Changelog": "https://github.com/simonw/datasette-total-page-time/releases", "Homepage": "https://github.com/simonw/datasette-total-page-time", "Issues": "https://github.com/simonw/datasette-total-page-time/issues"} | https://pypi.org/project/datasette-total-page-time/0.1/ | ["datasette", "pytest ; extra == 'test'", "pytest-asyncio ; extra == 'test'"] | >=3.7 | 0.1 | 0 | ||||||
datasette-upload-csvs | Datasette plugin for uploading CSV files and converting them to database tables | [] | # datasette-upload-csvs [](https://pypi.org/project/datasette-upload-csvs/) [](https://github.com/simonw/datasette-upload-csvs/releases) [](https://github.com/simonw/datasette-upload-csvs/blob/main/LICENSE) Datasette plugin for uploading CSV files and converting them to database tables ## Installation datasette install datasette-upload-csvs ## Usage The plugin adds an interface at `/-/upload-csvs` for uploading a CSV file and using it to create a new database table. By default only [the root actor](https://datasette.readthedocs.io/en/stable/authentication.html#using-the-root-actor) can access the page - so you'll need to run Datasette with the `--root` option and click on the link shown in the terminal to sign in and access the page. The `upload-csvs` permission governs access. You can use permission plugins such as [datasette-permissions-sql](https://github.com/simonw/datasette-permissions-sql) to grant additional access to the write interface. | Simon Willison | text/markdown | https://datasette.io/plugins/datasette-upload-csvs | Apache License, Version 2.0 | https://pypi.org/project/datasette-upload-csvs/ | https://pypi.org/project/datasette-upload-csvs/ | {"CI": "https://github.com/simonw/datasette-upload-csvs/actions", "Changelog": "https://github.com/simonw/datasette-upload-csvs/releases", "Homepage": "https://datasette.io/plugins/datasette-upload-csvs", "Issues": "https://github.com/simonw/datasette-upload-csvs/issues"} | https://pypi.org/project/datasette-upload-csvs/0.8.2/ | ["datasette (>=0.61)", "asgi-csrf (>=0.7)", "starlette", "aiofiles", "python-multipart", "charset-normalizer", "sqlite-utils", "pytest ; extra == 'test'", "pytest-asyncio ; extra == 'test'", "asgiref ; extra == 'test'", "httpx ; extra == 'test'", "asgi-lifespan ; extra == 'test'"] | >=3.7 | 0.8.2 | 0 | ||||||
datasette-upload-dbs | Upload SQLite database files to Datasette | ["Framework :: Datasette", "License :: OSI Approved :: Apache Software License"] | # datasette-upload-dbs [](https://pypi.org/project/datasette-upload-dbs/) [](https://github.com/simonw/datasette-upload-dbs/releases) [](https://github.com/simonw/datasette-upload-dbs/actions?query=workflow%3ATest) [](https://github.com/simonw/datasette-upload-dbs/blob/main/LICENSE) Upload SQLite database files to Datasette ## Installation Install this plugin in the same environment as Datasette. datasette install datasette-upload-dbs ## Configuration This plugin requires you to configure a directory in which uploaded files will be stored. On startup, Datasette will automatically load any SQLite files that it finds in that directory. This means it is safe to restart your server in between file uploads. To configure the directory as `/home/datasette/uploads`, add this to a `metadata.yml` configuration file: ```yaml plugins: datasette-upload-dbs: directory: /home/datasette/uploads ``` Or if you are using `metadata.json`: ```json { "plugins": { "datasette-upload-dbs": { "directory": "/home/datasette/uploads" } } } ``` You can use `"."` for the current folder when the server starts, or `"uploads"` for a folder relative to that folder. The folder will be created on startup if it does not already exist. Then start Datasette like this: datasette -m metadata.yml ## Usage Only users with the `upload-dbs` permission will be able to upload files. The `root` user has this permission by default - other users can be granted access using permission plugins, see the [Permissions](https://docs.datasette.io/en/stable/authentication.html#permissions) documentation for details. To start Datasette as the root user, run this: datasette -m metadata.yml --roo… | Simon Willison | text/markdown | https://github.com/simonw/datasette-upload-dbs | Apache License, Version 2.0 | https://pypi.org/project/datasette-upload-dbs/ | https://pypi.org/project/datasette-upload-dbs/ | {"CI": "https://github.com/simonw/datasette-upload-dbs/actions", "Changelog": "https://github.com/simonw/datasette-upload-dbs/releases", "Homepage": "https://github.com/simonw/datasette-upload-dbs", "Issues": "https://github.com/simonw/datasette-upload-dbs/issues"} | https://pypi.org/project/datasette-upload-dbs/0.1.2/ | ["datasette", "starlette", "pytest ; extra == 'test'", "pytest-asyncio ; extra == 'test'"] | >=3.7 | 0.1.2 | 0 |
CREATE TABLE [pypi_packages] (
   [name] TEXT PRIMARY KEY,
   [summary] TEXT,
   [classifiers] TEXT,
   [description] TEXT,
   [author] TEXT,
   [author_email] TEXT,
   [description_content_type] TEXT,
   [home_page] TEXT,
   [keywords] TEXT,
   [license] TEXT,
   [maintainer] TEXT,
   [maintainer_email] TEXT,
   [package_url] TEXT,
   [platform] TEXT,
   [project_url] TEXT,
   [project_urls] TEXT,
   [release_url] TEXT,
   [requires_dist] TEXT,
   [requires_python] TEXT,
   [version] TEXT,
   [yanked] INTEGER,
   [yanked_reason] TEXT
);
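The `requires_dist`, `classifiers` and `project_urls` columns in this schema hold JSON serialized as TEXT, so SQLite's built-in JSON functions can unpack them in queries against this table. As a minimal sketch (assuming the export has been loaded into a SQLite file; the filename `pypi.db` below is illustrative), this query expands each package's dependency list and counts the most common dependencies:

```sql
-- json_each() is a table-valued function: joined against pypi_packages
-- it emits one row per element of each package's requires_dist JSON array.
select je.value as dependency, count(*) as n
from pypi_packages, json_each(pypi_packages.requires_dist) as je
where pypi_packages.requires_dist is not null
group by je.value
order by n desc;
```

Run it with `sqlite3 pypi.db`, or paste it into the SQL editor of a Datasette instance serving the same file (`datasette pypi.db`).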