name,summary,classifiers,description,author,author_email,description_content_type,home_page,keywords,license,maintainer,maintainer_email,package_url,platform,project_url,project_urls,release_url,requires_dist,requires_python,version,yanked,yanked_reason csv-diff,Python CLI tool and library for diffing CSV and JSON files,"[""Development Status :: 4 - Beta"", ""Intended Audience :: Developers"", ""Intended Audience :: End Users/Desktop"", ""Intended Audience :: Science/Research"", ""License :: OSI Approved :: Apache Software License"", ""Programming Language :: Python :: 3.6"", ""Programming Language :: Python :: 3.7""]","# csv-diff [![PyPI](https://img.shields.io/pypi/v/csv-diff.svg)](https://pypi.org/project/csv-diff/) [![Changelog](https://img.shields.io/github/v/release/simonw/csv-diff?include_prereleases&label=changelog)](https://github.com/simonw/csv-diff/releases) [![Tests](https://github.com/simonw/csv-diff/workflows/Test/badge.svg)](https://github.com/simonw/csv-diff/actions?query=workflow%3ATest) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/csv-diff/blob/main/LICENSE) Tool for viewing the difference between two CSV, TSV or JSON files. See [Generating a commit log for San Francisco’s official list of trees](https://simonwillison.net/2019/Mar/13/tree-history/) (and the [sf-tree-history repo commit log](https://github.com/simonw/sf-tree-history/commits)) for background information on this project. ## Installation pip install csv-diff ## Usage Consider two CSV files: `one.csv` id,name,age 1,Cleo,4 2,Pancakes,2 `two.csv` id,name,age 1,Cleo,5 3,Bailey,1 `csv-diff` can show a human-readable summary of differences between the files: $ csv-diff one.csv two.csv --key=id 1 row changed, 1 row added, 1 row removed 1 row changed Row 1 age: ""4"" => ""5"" 1 row added id: 3 name: Bailey age: 1 1 row removed id: 2 name: Pancakes age: 2 The `--key=id` option means that the `id` column should be treated as the unique key, to identify which records have changed. The tool will automatically detect if your files are comma- or tab-separated. You can over-ride this automatic detection and force the tool to use a specific format using `--format=tsv` or `--format=csv`. You can also feed it JSON files, provided they are a JSON array of objects where each object has the same keys. Use `--format=json` if your input files are JSON. Use `--show-unchanged` to include full details of the unchanged values for rows with at least one change in the diff output: % csv-diff one.csv two.csv --key=id --show-unchanged 1 row changed id: 1 age: ""4"" => ""5"" Unchanged: name: ""Cleo"" You can use the `--json` option to get a machine-readable difference: $ csv-diff one.csv two.csv --key=id --json { ""added"": [ { ""id"": ""3"", ""name"": ""Bailey"", ""age"": ""1"" } ], ""removed"": [ { ""id"": ""2"", ""name"": ""Pancakes"", ""age"": ""2"" } ], ""changed"": [ { ""key"": ""1"", ""changes"": { ""age"": [ ""4"", ""5"" ] } } ], ""columns_added"": [], ""columns_removed"": [] } ## As a Python library You can also import the Python library into your own code like so: from csv_diff import load_csv, compare diff = compare( load_csv(open(""one.csv""), key=""id""), load_csv(open(""two.csv""), key=""id"") ) `diff` will now contain the same data structure as the output in the `--json` example above. If the columns in the CSV have changed, those added or removed columns will be ignored when calculating changes made to specific rows. 
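The diff dictionary can also be processed programmatically. As a rough sketch (assuming the same structure shown in the `--json` example above), you could report each change like this:

```python
from csv_diff import load_csv, compare

diff = compare(
    load_csv(open('one.csv'), key='id'),
    load_csv(open('two.csv'), key='id'),
)

# Each 'changed' entry maps column names to [old, new] value pairs
for row in diff['changed']:
    key = row['key']
    for column, (old, new) in row['changes'].items():
        print(f'row {key}: {column} changed from {old!r} to {new!r}')
```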
",Simon Willison,,text/markdown,https://github.com/simonw/csv-diff,,"Apache License, Version 2.0",,,https://pypi.org/project/csv-diff/,,https://pypi.org/project/csv-diff/,"{""Homepage"": ""https://github.com/simonw/csv-diff""}",https://pypi.org/project/csv-diff/1.1/,"[""click"", ""dictdiffer"", ""pytest ; extra == 'test'""]",,1.1,0, csvs-to-sqlite,Convert CSV files into a SQLite database,"[""Intended Audience :: Developers"", ""Intended Audience :: End Users/Desktop"", ""Intended Audience :: Science/Research"", ""License :: OSI Approved :: Apache Software License"", ""Programming Language :: Python :: 3.6"", ""Programming Language :: Python :: 3.7"", ""Programming Language :: Python :: 3.8"", ""Programming Language :: Python :: 3.9"", ""Topic :: Database""]","# csvs-to-sqlite [![PyPI](https://img.shields.io/pypi/v/csvs-to-sqlite.svg)](https://pypi.org/project/csvs-to-sqlite/) [![Changelog](https://img.shields.io/github/v/release/simonw/csvs-to-sqlite?include_prereleases&label=changelog)](https://github.com/simonw/csvs-to-sqlite/releases) [![Tests](https://github.com/simonw/csvs-to-sqlite/workflows/Test/badge.svg)](https://github.com/simonw/csvs-to-sqlite/actions?query=workflow%3ATest) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/csvs-to-sqlite/blob/main/LICENSE) Convert CSV files into a SQLite database. Browse and publish that SQLite database with [Datasette](https://github.com/simonw/datasette). Basic usage: csvs-to-sqlite myfile.csv mydatabase.db This will create a new SQLite database called `mydatabase.db` containing a single table, `myfile`, containing the CSV content. You can provide multiple CSV files: csvs-to-sqlite one.csv two.csv bundle.db The `bundle.db` database will contain two tables, `one` and `two`. This means you can use wildcards: csvs-to-sqlite ~/Downloads/*.csv my-downloads.db If you pass a path to one or more directories, the script will recursively search those directories for CSV files and create tables for each one. csvs-to-sqlite ~/path/to/directory all-my-csvs.db ## Handling TSV (tab-separated values) You can use the `-s` option to specify a different delimiter. If you want to use a tab character you'll need to apply shell escaping like so: csvs-to-sqlite my-file.tsv my-file.db -s $'\t' ## Refactoring columns into separate lookup tables Let's say you have a CSV file that looks like this: county,precinct,office,district,party,candidate,votes Clark,1,President,,REP,John R. Kasich,5 Clark,2,President,,REP,John R. Kasich,0 Clark,3,President,,REP,John R. Kasich,7 ([Real example taken from the Open Elections project](https://github.com/openelections/openelections-data-sd/blob/master/2016/20160607__sd__primary__clark__precinct.csv)) You can now convert selected columns into separate lookup tables using the new `--extract-column` option (shortname: `-c`) - for example: csvs-to-sqlite openelections-data-*/*.csv \ -c county:County:name \ -c precinct:Precinct:name \ -c office -c district -c party -c candidate \ openelections.db The format is as follows: column_name:optional_table_name:optional_table_value_column_name If you just specify the column name e.g. `-c office`, the following table will be created: CREATE TABLE ""office"" ( ""id"" INTEGER PRIMARY KEY, ""value"" TEXT ); If you specify all three options, e.g. 
`-c precinct:Precinct:name` the table will look like this: CREATE TABLE ""Precinct"" ( ""id"" INTEGER PRIMARY KEY, ""name"" TEXT ); The original tables will be created like this: CREATE TABLE ""ca__primary__san_francisco__precinct"" ( ""county"" INTEGER, ""precinct"" INTEGER, ""office"" INTEGER, ""district"" INTEGER, ""party"" INTEGER, ""candidate"" INTEGER, ""votes"" INTEGER, FOREIGN KEY (county) REFERENCES County(id), FOREIGN KEY (party) REFERENCES party(id), FOREIGN KEY (precinct) REFERENCES Precinct(id), FOREIGN KEY (office) REFERENCES office(id), FOREIGN KEY (candidate) REFERENCES candidate(id) ); They will be populated with IDs that reference the new derived tables. ## Installation $ pip install csvs-to-sqlite `csvs-to-sqlite` now requires Python 3. If you are running Python 2 you can install the last version to support Python 2: $ pip install csvs-to-sqlite==0.9.2 ## csvs-to-sqlite --help ``` Usage: csvs-to-sqlite [OPTIONS] PATHS... DBNAME PATHS: paths to individual .csv files or to directories containing .csvs DBNAME: name of the SQLite database file to create Options: -s, --separator TEXT Field separator in input .csv -q, --quoting INTEGER Control field quoting behavior per csv.QUOTE_* constants. Use one of QUOTE_MINIMAL (0), QUOTE_ALL (1), QUOTE_NONNUMERIC (2) or QUOTE_NONE (3). --skip-errors Skip lines with too many fields instead of stopping the import --replace-tables Replace tables if they already exist -t, --table TEXT Table to use (instead of using CSV filename) -c, --extract-column TEXT One or more columns to 'extract' into a separate lookup table. If you pass a simple column name that column will be replaced with integer foreign key references to a new table of that name. You can customize the name of the table like so: state:States:state_name This will pull unique values from the 'state' column and use them to populate a new 'States' table, with an id column primary key and a state_name column containing the strings from the original column. -d, --date TEXT One or more columns to parse into ISO formatted dates -dt, --datetime TEXT One or more columns to parse into ISO formatted datetimes -df, --datetime-format TEXT One or more custom date format strings to try when parsing dates/datetimes -pk, --primary-key TEXT One or more columns to use as the primary key -f, --fts TEXT One or more columns to use to populate a full- text index -i, --index TEXT Add index on this column (or a compound index with -i col1,col2) --shape TEXT Custom shape for the DB table - format is csvcol:dbcol(TYPE),... --filename-column TEXT Add a column with this name and populate with CSV file name --fixed-column ... Populate column with a fixed string --fixed-column-int ... Populate column with a fixed integer --fixed-column-float ... Populate column with a fixed float --no-index-fks Skip adding index to foreign key columns created using --extract-column (default is to add them) --no-fulltext-fks Skip adding full-text index on values extracted using --extract-column (default is to add them) --just-strings Import all columns as text strings by default (and, if specified, still obey --shape, --date/datetime, and --datetime-format) --version Show the version and exit. --help Show this message and exit. 
``` ",Simon Willison,,text/markdown,https://github.com/simonw/csvs-to-sqlite,,"Apache License, Version 2.0",,,https://pypi.org/project/csvs-to-sqlite/,,https://pypi.org/project/csvs-to-sqlite/,"{""Homepage"": ""https://github.com/simonw/csvs-to-sqlite""}",https://pypi.org/project/csvs-to-sqlite/1.3/,"[""click (~=7.0)"", ""dateparser (>=1.0)"", ""pandas (>=1.0)"", ""py-lru-cache (~=0.1.4)"", ""six"", ""pytest ; extra == 'test'"", ""cogapp ; extra == 'test'""]",,1.3,0, datasette,An open source multi-tool for exploring and publishing data,"[""Development Status :: 4 - Beta"", ""Framework :: Datasette"", ""Intended Audience :: Developers"", ""Intended Audience :: End Users/Desktop"", ""Intended Audience :: Science/Research"", ""License :: OSI Approved :: Apache Software License"", ""Programming Language :: Python :: 3.10"", ""Programming Language :: Python :: 3.7"", ""Programming Language :: Python :: 3.8"", ""Programming Language :: Python :: 3.9"", ""Topic :: Database""]"," [![PyPI](https://img.shields.io/pypi/v/datasette.svg)](https://pypi.org/project/datasette/) [![Changelog](https://img.shields.io/github/v/release/simonw/datasette?label=changelog)](https://docs.datasette.io/en/stable/changelog.html) [![Python 3.x](https://img.shields.io/pypi/pyversions/datasette.svg?logo=python&logoColor=white)](https://pypi.org/project/datasette/) [![Tests](https://github.com/simonw/datasette/workflows/Test/badge.svg)](https://github.com/simonw/datasette/actions?query=workflow%3ATest) [![Documentation Status](https://readthedocs.org/projects/datasette/badge/?version=latest)](https://docs.datasette.io/en/latest/?badge=latest) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/datasette/blob/main/LICENSE) [![docker: datasette](https://img.shields.io/badge/docker-datasette-blue)](https://hub.docker.com/r/datasetteproject/datasette) [![discord](https://img.shields.io/discord/823971286308356157?label=discord)](https://discord.gg/ktd74dm5mw) *An open source multi-tool for exploring and publishing data* Datasette is a tool for exploring and publishing data. It helps people take data of any shape or size and publish that as an interactive, explorable website and accompanying API. Datasette is aimed at data journalists, museum curators, archivists, local governments, scientists, researchers and anyone else who has data that they wish to share with the world. [Explore a demo](https://global-power-plants.datasettes.com/global-power-plants/global-power-plants), watch [a video about the project](https://simonwillison.net/2021/Feb/7/video/) or try it out by [uploading and publishing your own CSV data](https://docs.datasette.io/en/stable/getting_started.html#try-datasette-without-installing-anything-using-glitch). * [datasette.io](https://datasette.io/) is the official project website * Latest [Datasette News](https://datasette.io/news) * Comprehensive documentation: https://docs.datasette.io/ * Examples: https://datasette.io/examples * Live demo of current `main` branch: https://latest.datasette.io/ * Questions, feedback or want to talk about the project? Join our [Discord](https://discord.gg/ktd74dm5mw) Want to stay up-to-date with the project? Subscribe to the [Datasette newsletter](https://datasette.substack.com/) for tips, tricks and news on what's new in the Datasette ecosystem. 
## Installation If you are on a Mac, [Homebrew](https://brew.sh/) is the easiest way to install Datasette: brew install datasette You can also install it using `pip` or `pipx`: pip install datasette Datasette requires Python 3.7 or higher. We also have [detailed installation instructions](https://docs.datasette.io/en/stable/installation.html) covering other options such as Docker. ## Basic usage datasette serve path/to/database.db This will start a web server on port 8001 - visit http://localhost:8001/ to access the web interface. `serve` is the default subcommand, you can omit it if you like. Use Chrome on OS X? You can run datasette against your browser history like so: datasette ~/Library/Application\ Support/Google/Chrome/Default/History --nolock Now visiting http://localhost:8001/History/downloads will show you a web interface to browse your downloads data: ![Downloads table rendered by datasette](https://static.simonwillison.net/static/2017/datasette-downloads.png) ## metadata.json If you want to include licensing and source information in the generated datasette website you can do so using a JSON file that looks something like this: { ""title"": ""Five Thirty Eight"", ""license"": ""CC Attribution 4.0 License"", ""license_url"": ""http://creativecommons.org/licenses/by/4.0/"", ""source"": ""fivethirtyeight/data on GitHub"", ""source_url"": ""https://github.com/fivethirtyeight/data"" } Save this in `metadata.json` and run Datasette like so: datasette serve fivethirtyeight.db -m metadata.json The license and source information will be displayed on the index page and in the footer. They will also be included in the JSON produced by the API. ## datasette publish If you have [Heroku](https://heroku.com/) or [Google Cloud Run](https://cloud.google.com/run/) configured, Datasette can deploy one or more SQLite databases to the internet with a single command: datasette publish heroku database.db Or: datasette publish cloudrun database.db This will create a docker image containing both the datasette application and the specified SQLite database files. It will then deploy that image to Heroku or Cloud Run and give you a URL to access the resulting website and API. See [Publishing data](https://docs.datasette.io/en/stable/publish.html) in the documentation for more details. ## Datasette Lite [Datasette Lite](https://lite.datasette.io/) is Datasette packaged using WebAssembly so that it runs entirely in your browser, no Python web application server required. Read more about that in the [Datasette Lite documentation](https://github.com/simonw/datasette-lite/blob/main/README.md). 
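Since every page in the web interface has a corresponding JSON API endpoint, you can also script against a running instance. A minimal sketch, assuming a local server started with `datasette serve` on the default port - the database and table names here are placeholders:

```python
import json
from urllib.request import urlopen

# Tables are exposed as JSON by adding a .json extension to their URL
with urlopen('http://localhost:8001/mydatabase/mytable.json') as response:
    data = json.load(response)

print(data.keys())
```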
",Simon Willison,,text/markdown,https://datasette.io/,,"Apache License, Version 2.0",,,https://pypi.org/project/datasette/,,https://pypi.org/project/datasette/,"{""CI"": ""https://github.com/simonw/datasette/actions?query=workflow%3ATest"", ""Changelog"": ""https://docs.datasette.io/en/stable/changelog.html"", ""Documentation"": ""https://docs.datasette.io/en/stable/"", ""Homepage"": ""https://datasette.io/"", ""Issues"": ""https://github.com/simonw/datasette/issues"", ""Live demo"": ""https://latest.datasette.io/"", ""Source code"": ""https://github.com/simonw/datasette""}",https://pypi.org/project/datasette/0.63.1/,"[""asgiref (>=3.2.10)"", ""click (>=7.1.1)"", ""click-default-group-wheel (>=1.2.2)"", ""Jinja2 (>=2.10.3)"", ""hupper (>=1.9)"", ""httpx (>=0.20)"", ""pint (>=0.9)"", ""pluggy (>=1.0)"", ""uvicorn (>=0.11)"", ""aiofiles (>=0.4)"", ""janus (>=0.6.2)"", ""asgi-csrf (>=0.9)"", ""PyYAML (>=5.3)"", ""mergedeep (>=1.1.1)"", ""itsdangerous (>=1.1)"", ""furo (==2022.9.29) ; extra == 'docs'"", ""sphinx-autobuild ; extra == 'docs'"", ""codespell ; extra == 'docs'"", ""blacken-docs ; extra == 'docs'"", ""sphinx-copybutton ; extra == 'docs'"", ""rich ; extra == 'rich'"", ""pytest (>=5.2.2) ; extra == 'test'"", ""pytest-xdist (>=2.2.1) ; extra == 'test'"", ""pytest-asyncio (>=0.17) ; extra == 'test'"", ""beautifulsoup4 (>=4.8.1) ; extra == 'test'"", ""black (==22.10.0) ; extra == 'test'"", ""blacken-docs (==1.12.1) ; extra == 'test'"", ""pytest-timeout (>=1.4.2) ; extra == 'test'"", ""trustme (>=0.7) ; extra == 'test'"", ""cogapp (>=3.3.0) ; extra == 'test'""]",>=3.7,0.63.1,0, datasette-auth0,Datasette plugin that authenticates users using Auth0,"[""Framework :: Datasette"", ""License :: OSI Approved :: Apache Software License""]","# datasette-auth0 [![PyPI](https://img.shields.io/pypi/v/datasette-auth0.svg)](https://pypi.org/project/datasette-auth0/) [![Changelog](https://img.shields.io/github/v/release/simonw/datasette-auth0?include_prereleases&label=changelog)](https://github.com/simonw/datasette-auth0/releases) [![Tests](https://github.com/simonw/datasette-auth0/workflows/Test/badge.svg)](https://github.com/simonw/datasette-auth0/actions?query=workflow%3ATest) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/datasette-auth0/blob/main/LICENSE) Datasette plugin that authenticates users using [Auth0](https://auth0.com/) See [Simplest possible OAuth authentication with Auth0](https://til.simonwillison.net/auth0/oauth-with-auth0) for more about how this plugin works. ## Installation Install this plugin in the same environment as Datasette. $ datasette install datasette-auth0 ## Demo You can try this out at [datasette-auth0-demo.datasette.io](https://datasette-auth0-demo.datasette.io/) - click on the top right menu icon and select ""Sign in with Auth0"". ## Initial configuration First, create a new application in Auth0. You will need the domain, client ID and client secret for that application. The domain should be something like `mysite.us.auth0.com`. Add `http://127.0.0.1:8001/-/auth0-callback` to the list of Allowed Callback URLs. Then configure these plugin secrets using `metadata.yml`: ```yaml plugins: datasette-auth0: domain: ""$env"": AUTH0_DOMAIN client_id: ""$env"": AUTH0_CLIENT_ID client_secret: ""$env"": AUTH0_CLIENT_SECRET ``` Only the `client_secret` needs to be kept secret, but for consistency I recommend using the `$env` mechanism for all three. 
In development, you can run Datasette and pass in environment variables like this: ``` AUTH0_DOMAIN=""your-domain.us.auth0.com"" \ AUTH0_CLIENT_ID=""...client-id-goes-here..."" \ AUTH0_CLIENT_SECRET=""...secret-goes-here..."" \ datasette -m metadata.yml ``` If you are deploying using `datasette publish` you can pass these using `--plugin-secret`. For example, to deploy using Cloud Run you might run the following: ``` datasette publish cloudrun mydatabase.db \ --install datasette-auth0 \ --plugin-secret datasette-auth0 domain ""your-domain.us.auth0.com"" \ --plugin-secret datasette-auth0 client_id ""your-client-id"" \ --plugin-secret datasette-auth0 client_secret ""your-client-secret"" \ --service datasette-auth0-demo ``` Once your Datasette instance is deployed, you will need to add its callback URL to the ""Allowed Callback URLs"" list in Auth0. The callback URL should be something like: https://url-to-your-datasette/-/auth0-callback ## Usage Once installed, a ""Sign in with Auth0"" menu item will appear in the Datasette main menu. You can sign in and then visit the `/-/actor` page to see full details of the `auth0` profile that has been authenticated. You can then use [Datasette permissions](https://docs.datasette.io/en/stable/authentication.html#configuring-permissions-in-metadata-json) to grant or deny access to different parts of Datasette based on the authenticated user. ## Development To set up this plugin locally, first checkout the code. Then create a new virtual environment: cd datasette-auth0 python3 -mvenv venv source venv/bin/activate Now install the dependencies and test dependencies: pip install -e '.[test]' To run the tests: pytest ",Simon Willison,,text/markdown,https://github.com/simonw/datasette-auth0,,"Apache License, Version 2.0",,,https://pypi.org/project/datasette-auth0/,,https://pypi.org/project/datasette-auth0/,"{""CI"": ""https://github.com/simonw/datasette-auth0/actions"", ""Changelog"": ""https://github.com/simonw/datasette-auth0/releases"", ""Homepage"": ""https://github.com/simonw/datasette-auth0"", ""Issues"": ""https://github.com/simonw/datasette-auth0/issues""}",https://pypi.org/project/datasette-auth0/0.1/,"[""datasette"", ""pytest ; extra == 'test'"", ""pytest-asyncio ; extra == 'test'"", ""pytest-httpx ; extra == 'test'""]",>=3.7,0.1,0, datasette-cluster-map,Datasette plugin that shows a map for any data with latitude/longitude columns,"[""Framework :: Datasette"", ""License :: OSI Approved :: Apache Software License""]","# datasette-cluster-map [![PyPI](https://img.shields.io/pypi/v/datasette-cluster-map.svg)](https://pypi.org/project/datasette-cluster-map/) [![Changelog](https://img.shields.io/github/v/release/simonw/datasette-cluster-map?include_prereleases&label=changelog)](https://github.com/simonw/datasette-cluster-map/releases) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/datasette-cluster-map/blob/main/LICENSE) A [Datasette plugin](https://docs.datasette.io/en/stable/plugins.html) that detects tables with `latitude` and `longitude` columns and then plots them on a map using [Leaflet.markercluster](https://github.com/Leaflet/Leaflet.markercluster). More about this project: [Datasette plugins, and building a clustered map visualization](https://simonwillison.net/2018/Apr/20/datasette-plugins/). 
## Demo [global-power-plants.datasettes.com](https://global-power-plants.datasettes.com/global-power-plants/global-power-plants) hosts a demo of this plugin running against a database of 33,000 power plants around the world. ![Cluster map demo](https://static.simonwillison.net/static/2020/global-power-plants.png) ## Installation Run `datasette install datasette-cluster-map` to add this plugin to your Datasette virtual environment. Datasette will automatically load the plugin if it is installed in this way. If you are deploying using the `datasette publish` command you can use the `--install` option: datasette publish cloudrun mydb.db --install=datasette-cluster-map If any of your tables have a `latitude` and `longitude` column, a map will be automatically displayed. ## Configuration If your columns are called something else you can configure the column names using [plugin configuration](https://docs.datasette.io/en/stable/plugins.html#plugin-configuration) in a `metadata.json` file. For example, if all of your columns are called `xlat` and `xlng` you can create a `metadata.json` file like this: ```json { ""title"": ""Regular metadata keys can go here too"", ""plugins"": { ""datasette-cluster-map"": { ""latitude_column"": ""xlat"", ""longitude_column"": ""xlng"" } } } ``` Then run Datasette like this: datasette mydata.db -m metadata.json This will configure the required column names for every database loaded by that Datasette instance. If you want to customize the column names for just one table in one database, you can do something like this: ```json { ""databases"": { ""polar-bears"": { ""tables"": { ""USGS_WC_eartag_deployments_2009-2011"": { ""plugins"": { ""datasette-cluster-map"": { ""latitude_column"": ""Capture Latitude"", ""longitude_column"": ""Capture Longitude"" } } } } } } } ``` You can also use a custom SQL query to rename those columns to `latitude` and `longitude`, [for example](https://polar-bears.now.sh/polar-bears?sql=select+*%2C%0D%0A++++%22Capture+Latitude%22+as+latitude%2C%0D%0A++++%22Capture+Longitude%22+as+longitude%0D%0Afrom+%5BUSGS_WC_eartag_deployments_2009-2011%5D): ```sql select *, ""Capture Latitude"" as latitude, ""Capture Longitude"" as longitude from [USGS_WC_eartag_deployments_2009-2011] ``` The map defaults to being displayed above the main results table on the page. You can use the `""container""` plugin setting to provide a CSS selector indicating an element that the map should be appended to instead. ## Custom tile layers You can customize the tile layer used by the maps using the `tile_layer` and `tile_layer_options` configuration settings. For example, to use the [Stamen Watercolor tiles](http://maps.stamen.com/watercolor/#12/37.7706/-122.3782) you can use these settings: ```json { ""plugins"": { ""datasette-cluster-map"": { ""tile_layer"": ""https://stamen-tiles-{s}.a.ssl.fastly.net/watercolor/{z}/{x}/{y}.{ext}"", ""tile_layer_options"": { ""attribution"": ""Map tiles by Stamen Design, CC BY 3.0 — Map data © OpenStreetMap contributors"", ""subdomains"": ""abcd"", ""minZoom"": 1, ""maxZoom"": 16, ""ext"": ""jpg"" } } } } ``` The [Leaflet Providers preview list](https://leaflet-extras.github.io/leaflet-providers/preview/index.html) has details of many other tile layers you can use. ## Custom marker popups The marker popup defaults to displaying the data for the underlying database row. You can customize this by including a `popup` column in your results containing JSON that defines a more useful popup. 
The JSON in the popup column should look something like this: ```json { ""image"": ""https://niche-museums.imgix.net/dodgems.heic?w=800&h=400&fit=crop"", ""alt"": ""Dingles Fairground Heritage Centre"", ""title"": ""Dingles Fairground Heritage Centre"", ""description"": ""Home of the National Fairground Collection, Dingles has over 45,000 indoor square feet of vintage fairground rides... and you can go on them! Highlights include the last complete surviving and opera"", ""link"": ""/browse/museums/26"" } ``` Each of these columns is optional. - `title` is the title to show at the top of the popup - `image` is the URL to an image to display in the popup - `alt` is the alt attribute to use for that image - `description` is a longer string of text to use as a description - `link` is a URL that the marker content should link to You can use the SQLite `json_object()` function to construct this data dynamically as part of your SQL query. Here's an example: ```sql select json_object( 'image', photo_url || '?w=800&h=400&fit=crop', 'title', name, 'description', substr(description, 0, 200), 'link', '/browse/museums/' || id ) as popup, latitude, longitude from museums where id in (26, 27) order by id ``` [Try that example here](https://www.niche-museums.com/browse?sql=select+json_object%28%0D%0A++%27image%27%2C+photo_url+%7C%7C+%27%3Fw%3D800%26h%3D400%26fit%3Dcrop%27%2C%0D%0A++%27title%27%2C+name%2C%0D%0A++%27description%27%2C+substr%28description%2C+0%2C+200%29%2C%0D%0A++%27link%27%2C+%27%2Fbrowse%2Fmuseums%2F%27+%7C%7C+id%0D%0A++%29+as+popup%2C%0D%0A++latitude%2C+longitude+from+museums) or take a look at [this demo built using a SQL view](https://dogsheep-photos.dogsheep.net/public/photos_on_a_map). ## How I deployed the demo datasette publish cloudrun global-power-plants.db \ --service global-power-plants \ --metadata metadata.json \ --install=datasette-cluster-map \ --extra-options=""--config facet_time_limit_ms:1000"" ## Development To set up this plugin locally, first checkout the code. 
Then create a new virtual environment: cd datasette-cluster-map python3 -mvenv venv source venv/bin/activate Or if you are using `pipenv`: pipenv shell Now install the dependencies and test dependencies: pip install -e '.[test]' To run the tests: pytest ",Simon Willison,,text/markdown,https://github.com/simonw/datasette-cluster-map,,"Apache License, Version 2.0",,,https://pypi.org/project/datasette-cluster-map/,,https://pypi.org/project/datasette-cluster-map/,"{""CI"": ""https://github.com/simonw/datasette-cluster-map/actions"", ""Changelog"": ""https://github.com/simonw/datasette-cluster-map/releases"", ""Homepage"": ""https://github.com/simonw/datasette-cluster-map"", ""Issues"": ""https://github.com/simonw/datasette-cluster-map/issues""}",https://pypi.org/project/datasette-cluster-map/0.17.2/,"[""datasette (>=0.54)"", ""datasette-leaflet (>=0.2.2)"", ""pytest ; extra == 'test'"", ""pytest-asyncio ; extra == 'test'"", ""httpx ; extra == 'test'"", ""sqlite-utils ; extra == 'test'""]",,0.17.2,0, datasette-copy-to-memory,Copy database files into an in-memory database on startup,"[""Framework :: Datasette"", ""License :: OSI Approved :: Apache Software License""]","# datasette-copy-to-memory [![PyPI](https://img.shields.io/pypi/v/datasette-copy-to-memory.svg)](https://pypi.org/project/datasette-copy-to-memory/) [![Changelog](https://img.shields.io/github/v/release/simonw/datasette-copy-to-memory?include_prereleases&label=changelog)](https://github.com/simonw/datasette-copy-to-memory/releases) [![Tests](https://github.com/simonw/datasette-copy-to-memory/workflows/Test/badge.svg)](https://github.com/simonw/datasette-copy-to-memory/actions?query=workflow%3ATest) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/datasette-copy-to-memory/blob/main/LICENSE) Copy database files into an in-memory database on startup This plugin is **highly experimental**. It currently exists to support Datasette performance research, and is not designed for actual production usage. ## Installation Install this plugin in the same environment as Datasette. datasette install datasette-copy-to-memory ## Usage On startup, Datasette will create an in-memory named database for each attached database. This database will have the same name but with `_memory` at the end. So running this: datasette fixtures.db Will serve two databases: the original at `/fixtures` and the in-memory copy at `/fixtures_memory`. ## Demo A demo is running on [latest-with-plugins.datasette.io](https://latest-with-plugins.datasette.io/) - the [/fixtures_memory](https://latest-with-plugins.datasette.io/fixtures_memory) database there is provided by this plugin. ## Configuration By default every attached database file will be loaded into a `_memory` copy. You can use plugin configuration to specify just a subset of the attached databases. For example, to create `github_memory` but not `fixtures_memory` you would use the following `metadata.yml` file: ```yaml plugins: datasette-copy-to-memory: databases: - github ``` Then start Datasette like this: datasette github.db fixtures.db -m metadata.yml If you don't want to have a `fixtures` and `fixtures_memory` database, you can use `replace: true` to have the plugin replace the file-backed database with the new in-memory one, reusing the same database name: ```yaml plugins: datasette-copy-to-memory: replace: true ``` Then: datasette github.db fixtures.db -m metadata.yml This will result in both `/github` and `/fixtures` but no `/github_memory` or `/fixtures_memory`. 
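One way to confirm which databases ended up being served is Datasette's introspection endpoint. A minimal sketch, assuming a local instance on the default port and the `/-/databases.json` endpoint provided by Datasette core:

```python
import json
from urllib.request import urlopen

with urlopen('http://localhost:8001/-/databases.json') as response:
    databases = json.load(response)

# With the default configuration you should see both 'github' and
# 'github_memory' listed; with replace: true, just 'github'.
for db in databases:
    print(db['name'])
```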
## Development To set up this plugin locally, first checkout the code. Then create a new virtual environment: cd datasette-copy-to-memory python3 -m venv venv source venv/bin/activate Now install the dependencies and test dependencies: pip install -e '.[test]' To run the tests: pytest ",Simon Willison,,text/markdown,https://github.com/simonw/datasette-copy-to-memory,,"Apache License, Version 2.0",,,https://pypi.org/project/datasette-copy-to-memory/,,https://pypi.org/project/datasette-copy-to-memory/,"{""CI"": ""https://github.com/simonw/datasette-copy-to-memory/actions"", ""Changelog"": ""https://github.com/simonw/datasette-copy-to-memory/releases"", ""Homepage"": ""https://github.com/simonw/datasette-copy-to-memory"", ""Issues"": ""https://github.com/simonw/datasette-copy-to-memory/issues""}",https://pypi.org/project/datasette-copy-to-memory/0.2/,"[""datasette"", ""pytest ; extra == 'test'"", ""pytest-asyncio ; extra == 'test'"", ""sqlite-utils ; extra == 'test'""]",>=3.7,0.2,0, datasette-expose-env,Datasette plugin to expose selected environment variables at /-/env for debugging,"[""Framework :: Datasette"", ""License :: OSI Approved :: Apache Software License""]","# datasette-expose-env [![PyPI](https://img.shields.io/pypi/v/datasette-expose-env.svg)](https://pypi.org/project/datasette-expose-env/) [![Changelog](https://img.shields.io/github/v/release/simonw/datasette-expose-env?include_prereleases&label=changelog)](https://github.com/simonw/datasette-expose-env/releases) [![Tests](https://github.com/simonw/datasette-expose-env/workflows/Test/badge.svg)](https://github.com/simonw/datasette-expose-env/actions?query=workflow%3ATest) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/datasette-expose-env/blob/main/LICENSE) Datasette plugin to expose selected environment variables at `/-/env` for debugging ## Installation Install this plugin in the same environment as Datasette. datasette install datasette-expose-env ## Configuration Decide on a list of environment variables you would like to expose, then add the following to your `metadata.yml` configuration: ```yaml plugins: datasette-expose-env: - ENV_VAR_1 - ENV_VAR_2 - ENV_VAR_3 ``` If you are using JSON in a `metadata.json` file use the following: ```json { ""plugins"": { ""datasette-expose-env"": [ ""ENV_VAR_1"", ""ENV_VAR_2"", ""ENV_VAR_3"" ] } } ``` Visit `/-/env` on your Datasette instance to see the values of the environment variables. ## Development To set up this plugin locally, first checkout the code. 
Then create a new virtual environment: cd datasette-expose-env python3 -m venv venv source venv/bin/activate Now install the dependencies and test dependencies: pip install -e '.[test]' To run the tests: pytest ",Simon Willison,,text/markdown,https://github.com/simonw/datasette-expose-env,,"Apache License, Version 2.0",,,https://pypi.org/project/datasette-expose-env/,,https://pypi.org/project/datasette-expose-env/,"{""CI"": ""https://github.com/simonw/datasette-expose-env/actions"", ""Changelog"": ""https://github.com/simonw/datasette-expose-env/releases"", ""Homepage"": ""https://github.com/simonw/datasette-expose-env"", ""Issues"": ""https://github.com/simonw/datasette-expose-env/issues""}",https://pypi.org/project/datasette-expose-env/0.1/,"[""datasette"", ""pytest ; extra == 'test'"", ""pytest-asyncio ; extra == 'test'""]",>=3.7,0.1,0, datasette-external-links-new-tabs,Datasette plugin to open external links in new tabs,"[""Framework :: Datasette"", ""License :: OSI Approved :: Apache Software License""]","# datasette-external-links-new-tabs [![PyPI](https://img.shields.io/pypi/v/datasette-external-links-new-tabs.svg)](https://pypi.org/project/datasette-external-links-new-tabs/) [![Changelog](https://img.shields.io/github/v/release/ocdtrekkie/datasette-external-links-new-tabs?include_prereleases&label=changelog)](https://github.com/ocdtrekkie/datasette-external-links-new-tabs/releases) [![Tests](https://github.com/ocdtrekkie/datasette-external-links-new-tabs/workflows/Test/badge.svg)](https://github.com/ocdtrekkie/datasette-external-links-new-tabs/actions?query=workflow%3ATest) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/ocdtrekkie/datasette-external-links-new-tabs/blob/main/LICENSE) Datasette plugin to open external links in new tabs ## Installation Install this plugin in the same environment as Datasette. datasette install datasette-external-links-new-tabs ## Usage There are no usage instructions, it simply opens external links in a new tab. ## Development To set up this plugin locally, first checkout the code. 
Then create a new virtual environment: cd datasette-external-links-new-tabs python3 -m venv venv source venv/bin/activate Now install the dependencies and test dependencies: pip install -e '.[test]' To run the tests: pytest ",Jacob Weisz,,text/markdown,https://github.com/ocdtrekkie/datasette-external-links-new-tabs,,"Apache License, Version 2.0",,,https://pypi.org/project/datasette-external-links-new-tabs/,,https://pypi.org/project/datasette-external-links-new-tabs/,"{""CI"": ""https://github.com/ocdtrekkie/datasette-external-links-new-tabs/actions"", ""Changelog"": ""https://github.com/ocdtrekkie/datasette-external-links-new-tabs/releases"", ""Homepage"": ""https://github.com/ocdtrekkie/datasette-external-links-new-tabs"", ""Issues"": ""https://github.com/ocdtrekkie/datasette-external-links-new-tabs/issues""}",https://pypi.org/project/datasette-external-links-new-tabs/0.1/,"[""datasette"", ""pytest ; extra == 'test'"", ""pytest-asyncio ; extra == 'test'""]",>=3.7,0.1,0, datasette-gunicorn,Run a Datasette server using Gunicorn,"[""Framework :: Datasette"", ""License :: OSI Approved :: Apache Software License""]","# datasette-gunicorn [![PyPI](https://img.shields.io/pypi/v/datasette-gunicorn.svg)](https://pypi.org/project/datasette-gunicorn/) [![Changelog](https://img.shields.io/github/v/release/simonw/datasette-gunicorn?include_prereleases&label=changelog)](https://github.com/simonw/datasette-gunicorn/releases) [![Tests](https://github.com/simonw/datasette-gunicorn/workflows/Test/badge.svg)](https://github.com/simonw/datasette-gunicorn/actions?query=workflow%3ATest) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/datasette-gunicorn/blob/main/LICENSE) Run a [Datasette](https://datasette.io/) server using [Gunicorn](https://gunicorn.org/) ## Installation Install this plugin in the same environment as Datasette. datasette install datasette-gunicorn ## Usage The plugin adds a new `datasette gunicorn` command. This takes most of the same options as `datasette serve`, plus one more option for setting the number of Gunicorn workers to start: `-w/--workers X` - set the number of workers. Defaults to 1. To start serving a database using 4 workers, run the following: datasette gunicorn fixtures.db -w 4 It is advisable to switch your datasette [into WAL mode](https://til.simonwillison.net/sqlite/enabling-wal-mode) to get the best performance out of this configuration: sqlite3 fixtures.db 'PRAGMA journal_mode=WAL;' Run `datasette gunicorn --help` for a full list of options (which are the same as `datasette serve --help`, with the addition of the new `-w` option). ## datasette gunicorn --help Not all of the options to `datasette serve` are supported. Here's the full list of available options: ``` Usage: datasette gunicorn [OPTIONS] [FILES]... Start a Gunicorn server running to serve Datasette Options: -i, --immutable PATH Database files to open in immutable mode -h, --host TEXT Host for server. Defaults to 127.0.0.1 which means only connections from the local machine will be allowed. Use 0.0.0.0 to listen to all IPs and allow access from other machines. -p, --port INTEGER RANGE Port for server, defaults to 8001. Use -p 0 to automatically assign an available port. 
[0<=x<=65535] --cors Enable CORS by serving Access-Control-Allow-Origin: * --load-extension TEXT Path to a SQLite extension to load --inspect-file TEXT Path to JSON file created using ""datasette inspect"" -m, --metadata FILENAME Path to JSON/YAML file containing license/source metadata --template-dir DIRECTORY Path to directory containing custom templates --plugins-dir DIRECTORY Path to directory containing custom plugins --static MOUNT:DIRECTORY Serve static files from this directory at /MOUNT/... --memory Make /_memory database available --config CONFIG Deprecated: set config option using configname:value. Use --setting instead. --setting SETTING... Setting, see docs.datasette.io/en/stable/settings.html --secret TEXT Secret used for signing secure values, such as signed cookies --version-note TEXT Additional note to show on /-/versions --help-settings Show available settings --create Create database files if they do not exist --crossdb Enable cross-database joins using the /_memory database --nolock Ignore locking, open locked files in read-only mode -w, --workers INTEGER Number of Gunicorn workers [default: 1] --help Show this message and exit. ``` ## Development To set up this plugin locally, first checkout the code. Then create a new virtual environment: cd datasette-gunicorn python3 -m venv venv source venv/bin/activate Now install the dependencies and test dependencies: pip install -e '.[test]' To run the tests: pytest ",Simon Willison,,text/markdown,https://github.com/simonw/datasette-gunicorn,,"Apache License, Version 2.0",,,https://pypi.org/project/datasette-gunicorn/,,https://pypi.org/project/datasette-gunicorn/,"{""CI"": ""https://github.com/simonw/datasette-gunicorn/actions"", ""Changelog"": ""https://github.com/simonw/datasette-gunicorn/releases"", ""Homepage"": ""https://github.com/simonw/datasette-gunicorn"", ""Issues"": ""https://github.com/simonw/datasette-gunicorn/issues""}",https://pypi.org/project/datasette-gunicorn/0.1/,"[""datasette"", ""gunicorn"", ""pytest ; extra == 'test'"", ""pytest-asyncio ; extra == 'test'"", ""cogapp ; extra == 'test'""]",>=3.7,0.1,0, datasette-gzip,Add gzip compression to Datasette,"[""Framework :: Datasette"", ""License :: OSI Approved :: Apache Software License""]","# datasette-gzip [![PyPI](https://img.shields.io/pypi/v/datasette-gzip.svg)](https://pypi.org/project/datasette-gzip/) [![Changelog](https://img.shields.io/github/v/release/simonw/datasette-gzip?include_prereleases&label=changelog)](https://github.com/simonw/datasette-gzip/releases) [![Tests](https://github.com/simonw/datasette-gzip/workflows/Test/badge.svg)](https://github.com/simonw/datasette-gzip/actions?query=workflow%3ATest) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/datasette-gzip/blob/main/LICENSE) Add gzip compression to Datasette ## Installation Install this plugin in the same environment as Datasette. datasette install datasette-gzip ## Usage Once installed, Datasette will obey the `Accept-Encoding:` header sent by browsers or other user agents and return content compressed in the most appropriate way. This plugin is a thin wrapper for the [asgi-gzip library](https://github.com/simonw/asgi-gzip), which extracts the [GzipMiddleware](https://www.starlette.io/middleware/#gzipmiddleware) from Starlette. ## Development To set up this plugin locally, first checkout the code. 
Then create a new virtual environment: cd datasette-gzip python3 -mvenv venv source venv/bin/activate Now install the dependencies and test dependencies: pip install -e '.[test]' To run the tests: pytest ",Simon Willison,,text/markdown,https://github.com/simonw/datasette-gzip,,"Apache License, Version 2.0",,,https://pypi.org/project/datasette-gzip/,,https://pypi.org/project/datasette-gzip/,"{""CI"": ""https://github.com/simonw/datasette-gzip/actions"", ""Changelog"": ""https://github.com/simonw/datasette-gzip/releases"", ""Homepage"": ""https://github.com/simonw/datasette-gzip"", ""Issues"": ""https://github.com/simonw/datasette-gzip/issues""}",https://pypi.org/project/datasette-gzip/0.2/,"[""datasette"", ""asgi-gzip"", ""pytest ; extra == 'test'"", ""pytest-asyncio ; extra == 'test'""]",>=3.7,0.2,0, datasette-hashed-urls,Optimize Datasette performance behind a caching proxy,"[""Framework :: Datasette"", ""License :: OSI Approved :: Apache Software License""]","# datasette-hashed-urls [![PyPI](https://img.shields.io/pypi/v/datasette-hashed-urls.svg)](https://pypi.org/project/datasette-hashed-urls/) [![Changelog](https://img.shields.io/github/v/release/simonw/datasette-hashed-urls?include_prereleases&label=changelog)](https://github.com/simonw/datasette-hashed-urls/releases) [![Tests](https://github.com/simonw/datasette-hashed-urls/workflows/Test/badge.svg)](https://github.com/simonw/datasette-hashed-urls/actions?query=workflow%3ATest) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/datasette-hashed-urls/blob/main/LICENSE) Optimize Datasette performance behind a caching proxy When you open a database file in immutable mode using the `-i` option, Datasette calculates a SHA-256 hash of the contents of that file on startup. This content hash can then optionally be used to create URLs that are guaranteed to change if the contents of the file change in the future. The result is pages that can be cached indefinitely by both browsers and caching proxies - providing a significant performance boost. ## Demo A demo of this plugin is running at https://datasette-hashed-urls.vercel.app/ ## Installation Install this plugin in the same environment as Datasette. $ datasette install datasette-hashed-urls ## Usage Once installed, this plugin will act on any immutable database files that are loaded into Datasette: datasette -i fixtures.db The database will automatically be renamed to incorporate a hash of the contents of the SQLite file - so the above database would be served as: http://127.0.0.1:8001/fixtures-aa7318b Every page that accesses that database, including JSON endpoints, will be served with the following far-future cache expiry header: cache-control: max-age=31536000, public Here `max-age=31536000` is the number of seconds in a year. A caching proxy such as Cloudflare can then be used to cache and accelerate content served by Datasette. When the database file is updated and the server is restarted, the hash will change and content will be served from a new URL. Any hits to the previous hashed URLs will be automatically redirected. If you run Datasette using the `--crossdb` option to enable [cross-database queries](https://docs.datasette.io/en/stable/sql_queries.html#cross-database-queries) the `_memory` database will also have a hash added to its URL - in this case, the hash will be a combination of the hashes of the other attached databases. 
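You can verify the header from Python. A minimal sketch, assuming a local instance started with `datasette -i fixtures.db` and the example hashed URL shown above:

```python
from urllib.request import urlopen

# 31536000 seconds = 365 * 24 * 60 * 60, i.e. one year
with urlopen('http://127.0.0.1:8001/fixtures-aa7318b') as response:
    print(response.headers.get('cache-control'))
    # expected: max-age=31536000, public
```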
## Configuration You can use the `max_age` plugin configuration setting to change the cache duration specified in the `cache-control` HTTP header. To set the cache expiry time to one hour you would add this to your Datasette `metadata.json` configuration file: ```json { ""plugins"": { ""datasette-hashed-urls"": { ""max_age"": 3600 } } } ``` ## History This functionality used to ship as part of Datasette itself, as a feature called [Hashed URL mode](https://docs.datasette.io/en/0.60.2/performance.html#hashed-url-mode). That feature has been deprecated and will be removed in Datasette 1.0. This plugin should be used as an alternative. ## Development To set up this plugin locally, first checkout the code. Then create a new virtual environment: cd datasette-hashed-urls python3 -mvenv venv source venv/bin/activate Or if you are using `pipenv`: pipenv shell Now install the dependencies and test dependencies: pip install -e '.[test]' To run the tests: pytest ",Simon Willison,,text/markdown,https://github.com/simonw/datasette-hashed-urls,,"Apache License, Version 2.0",,,https://pypi.org/project/datasette-hashed-urls/,,https://pypi.org/project/datasette-hashed-urls/,"{""CI"": ""https://github.com/simonw/datasette-hashed-urls/actions"", ""Changelog"": ""https://github.com/simonw/datasette-hashed-urls/releases"", ""Homepage"": ""https://github.com/simonw/datasette-hashed-urls"", ""Issues"": ""https://github.com/simonw/datasette-hashed-urls/issues""}",https://pypi.org/project/datasette-hashed-urls/0.4/,"[""datasette (>=0.61.1)"", ""pytest ; extra == 'test'"", ""pytest-asyncio ; extra == 'test'"", ""sqlite-utils ; extra == 'test'""]",>=3.7,0.4,0, datasette-hovercards,Add preview hovercards to links in Datasette,"[""Framework :: Datasette"", ""License :: OSI Approved :: Apache Software License""]","# datasette-hovercards [![PyPI](https://img.shields.io/pypi/v/datasette-hovercards.svg)](https://pypi.org/project/datasette-hovercards/) [![Changelog](https://img.shields.io/github/v/release/simonw/datasette-hovercards?include_prereleases&label=changelog)](https://github.com/simonw/datasette-hovercards/releases) [![Tests](https://github.com/simonw/datasette-hovercards/workflows/Test/badge.svg)](https://github.com/simonw/datasette-hovercards/actions?query=workflow%3ATest) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/datasette-hovercards/blob/main/LICENSE) Add preview hovercards to links in Datasette ## Installation Install this plugin in the same environment as Datasette. $ datasette install datasette-hovercards ## Usage Once installed, hovering over a link to a row within the Datasette interface - for example a foreign key reference on the table page - should show a hovercard with a preview of that row. ## Development To set up this plugin locally, first checkout the code. 
Then create a new virtual environment: cd datasette-hovercards python3 -mvenv venv source venv/bin/activate Or if you are using `pipenv`: pipenv shell Now install the dependencies and test dependencies: pip install -e '.[test]' To run the tests: pytest ",Simon Willison,,text/markdown,https://github.com/simonw/datasette-hovercards,,"Apache License, Version 2.0",,,https://pypi.org/project/datasette-hovercards/,,https://pypi.org/project/datasette-hovercards/,"{""CI"": ""https://github.com/simonw/datasette-hovercards/actions"", ""Changelog"": ""https://github.com/simonw/datasette-hovercards/releases"", ""Homepage"": ""https://github.com/simonw/datasette-hovercards"", ""Issues"": ""https://github.com/simonw/datasette-hovercards/issues""}",https://pypi.org/project/datasette-hovercards/0.1a0/,"[""datasette"", ""pytest ; extra == 'test'"", ""pytest-asyncio ; extra == 'test'""]",>=3.6,0.1a0,0, datasette-ics,Datasette plugin for outputting iCalendar files,"[""Framework :: Datasette"", ""License :: OSI Approved :: Apache Software License""]","# datasette-ics [![PyPI](https://img.shields.io/pypi/v/datasette-ics.svg)](https://pypi.org/project/datasette-ics/) [![Changelog](https://img.shields.io/github/v/release/simonw/datasette-ics?include_prereleases&label=changelog)](https://github.com/simonw/datasette-ics/releases) [![Tests](https://github.com/simonw/datasette-ics/workflows/Test/badge.svg)](https://github.com/simonw/datasette-ics/actions?query=workflow%3ATest) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/datasette-ics/blob/main/LICENSE) Datasette plugin that adds support for generating [iCalendar .ics files](https://tools.ietf.org/html/rfc5545) with the results of a SQL query. ## Installation Install this plugin in the same environment as Datasette to enable the `.ics` output extension. $ pip install datasette-ics ## Usage To create an iCalendar file you need to define a custom SQL query that returns a required set of columns: * `event_name` - the short name for the event * `event_dtstart` - when the event starts The following columns are optional: * `event_dtend` - when the event ends * `event_duration` - the duration of the event (use instead of `dtend`) * `event_description` - a longer description of the event * `event_uid` - a globally unique identifier for this event * `event_tzid` - the timezone for the event, e.g. `America/Chicago` A query that returns these columns can then be returned as an ics feed by adding the `.ics` extension. 
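As an illustration, the following sketch builds such a query against a hypothetical `events` table (the table, column and database names are placeholders) and the corresponding `.ics` URL for a local Datasette instance:

```python
from urllib.parse import urlencode

sql = '''
select
  title as event_name,
  start as event_dtstart,
  notes as event_description
from events
order by start
'''

# Adding the .ics extension to a SQL query URL returns an iCalendar feed
print('http://localhost:8001/mydatabase.ics?' + urlencode({'sql': sql}))
```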
## Demo [This SQL query]([https://www.rockybeaches.com/data?sql=with+inner+as+(%0D%0A++select%0D%0A++++datetime%2C%0D%0A++++substr(datetime%2C+0%2C+11)+as+date%2C%0D%0A++++mllw_feet%2C%0D%0A++++lag(mllw_feet)+over+win+as+previous_mllw_feet%2C%0D%0A++++lead(mllw_feet)+over+win+as+next_mllw_feet%0D%0A++from%0D%0A++++tide_predictions%0D%0A++where%0D%0A++++station_id+%3D+%3Astation_id%0D%0A++++and+datetime+%3E%3D+date()%0D%0A++++window+win+as+(%0D%0A++++++order+by%0D%0A++++++++datetime%0D%0A++++)%0D%0A++order+by%0D%0A++++datetime%0D%0A)%2C%0D%0Alowest_tide_per_day+as+(%0D%0A++select%0D%0A++++date%2C%0D%0A++++datetime%2C%0D%0A++++mllw_feet%0D%0A++from%0D%0A++++inner%0D%0A++where%0D%0A++++mllw_feet+%3C%3D+previous_mllw_feet%0D%0A++++and+mllw_feet+%3C%3D+next_mllw_feet%0D%0A)%0D%0Aselect%0D%0A++min(datetime)+as+event_dtstart%2C%0D%0A++%27Low+tide%3A+%27+||+mllw_feet+||+%27+feet%27+as+event_name%2C%0D%0A++%27America%2FLos_Angeles%27+as+event_tzid%0D%0Afrom%0D%0A++lowest_tide_per_day%0D%0Agroup+by%0D%0A++date%0D%0Aorder+by%0D%0A++date&station_id=9414131) calculates the lowest tide per day at Pillar Point in Half Moon Bay, California. Since the query returns `event_name`, `event_dtstart` and `event_tzid` columns it produces [this ICS feed](https://www.rockybeaches.com/data.ics?sql=with+inner+as+(%0D%0A++select%0D%0A++++datetime%2C%0D%0A++++substr(datetime%2C+0%2C+11)+as+date%2C%0D%0A++++mllw_feet%2C%0D%0A++++lag(mllw_feet)+over+win+as+previous_mllw_feet%2C%0D%0A++++lead(mllw_feet)+over+win+as+next_mllw_feet%0D%0A++from%0D%0A++++tide_predictions%0D%0A++where%0D%0A++++station_id+%3D+%3Astation_id%0D%0A++++and+datetime+%3E%3D+date()%0D%0A++++window+win+as+(%0D%0A++++++order+by%0D%0A++++++++datetime%0D%0A++++)%0D%0A++order+by%0D%0A++++datetime%0D%0A)%2C%0D%0Alowest_tide_per_day+as+(%0D%0A++select%0D%0A++++date%2C%0D%0A++++datetime%2C%0D%0A++++mllw_feet%0D%0A++from%0D%0A++++inner%0D%0A++where%0D%0A++++mllw_feet+%3C%3D+previous_mllw_feet%0D%0A++++and+mllw_feet+%3C%3D+next_mllw_feet%0D%0A)%0D%0Aselect%0D%0A++min(datetime)+as+event_dtstart%2C%0D%0A++%27Low+tide%3A+%27+||+mllw_feet+||+%27+feet%27+as+event_name%2C%0D%0A++%27America%2FLos_Angeles%27+as+event_tzid%0D%0Afrom%0D%0A++lowest_tide_per_day%0D%0Agroup+by%0D%0A++date%0D%0Aorder+by%0D%0A++date&station_id=9414131). If you subscribe to that in a calendar application such as Apple Calendar you get something that looks like this: ![Apple Calendar showing low tides at Pillar Point during a week](https://user-images.githubusercontent.com/9599/173158984-e5ec6bd0-33fc-4fc0-ba9d-17ae674f310a.jpg) ## Using a canned query Datasette's [canned query mechanism](https://datasette.readthedocs.io/en/stable/sql_queries.html#canned-queries) can be used to configure calendars. If a canned query definition has a `title` that will be used as the title of the calendar. 
Here's an example, defined using a `metadata.yaml` file: ```yaml databases: mydatabase: queries: calendar: title: My Calendar sql: |- select title as event_name, start as event_dtstart, description as event_description from events order by start limit 100 ``` This will result in a calendar feed at `http://localhost:8001/mydatabase/calendar.ics` ",Simon Willison,,text/markdown,https://github.com/simonw/datasette-ics,,"Apache License, Version 2.0",,,https://pypi.org/project/datasette-ics/,,https://pypi.org/project/datasette-ics/,"{""CI"": ""https://github.com/simonw/datasette-ics/actions"", ""Changelog"": ""https://github.com/simonw/datasette-ics/releases"", ""Homepage"": ""https://github.com/simonw/datasette-ics"", ""Issues"": ""https://github.com/simonw/datasette-ics/issues""}",https://pypi.org/project/datasette-ics/0.5.2/,"[""datasette (>=0.49)"", ""ics (==0.7.2)"", ""pytest ; extra == 'test'"", ""pytest-asyncio ; extra == 'test'""]",,0.5.2,0, datasette-mp3-audio,Turn .mp3 URLs into an audio player in the Datasette interface,"[""Framework :: Datasette"", ""License :: OSI Approved :: Apache Software License""]","# datasette-mp3-audio [![PyPI](https://img.shields.io/pypi/v/datasette-mp3-audio.svg)](https://pypi.org/project/datasette-mp3-audio/) [![Changelog](https://img.shields.io/github/v/release/simonw/datasette-mp3-audio?include_prereleases&label=changelog)](https://github.com/simonw/datasette-mp3-audio/releases) [![Tests](https://github.com/simonw/datasette-mp3-audio/workflows/Test/badge.svg)](https://github.com/simonw/datasette-mp3-audio/actions?query=workflow%3ATest) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/datasette-mp3-audio/blob/main/LICENSE) Turn .mp3 URLs into an audio player in the Datasette interface ## Installation Install this plugin in the same environment as Datasette. datasette install datasette-mp3-audio ## Demo Try this plugin at [https://scotrail.datasette.io/scotrail/announcements](https://scotrail.datasette.io/scotrail/announcements) The demo uses ScotRail train announcements from [matteason/scotrail-announcements-june-2022](https://github.com/matteason/scotrail-announcements-june-2022). ## Usage Once installed, any cells with a value that ends in `.mp3` and starts with `http://`, `https://` or `/` will be turned into an embedded HTML audio element along these lines: ```html <audio controls src=""...""></audio> ``` A ""Play X MP3s on this page"" button will be added to the top of any table page listing more than one MP3. ## Development To set up this plugin locally, first checkout the code. 
Then create a new virtual environment: cd datasette-mp3-audio python3 -m venv venv source venv/bin/activate Now install the dependencies and test dependencies: pip install -e '.[test]' To run the tests: pytest ",Simon Willison,,text/markdown,https://github.com/simonw/datasette-mp3-audio,,"Apache License, Version 2.0",,,https://pypi.org/project/datasette-mp3-audio/,,https://pypi.org/project/datasette-mp3-audio/,"{""CI"": ""https://github.com/simonw/datasette-mp3-audio/actions"", ""Changelog"": ""https://github.com/simonw/datasette-mp3-audio/releases"", ""Homepage"": ""https://github.com/simonw/datasette-mp3-audio"", ""Issues"": ""https://github.com/simonw/datasette-mp3-audio/issues""}",https://pypi.org/project/datasette-mp3-audio/0.2/,"[""datasette"", ""pytest ; extra == 'test'"", ""pytest-asyncio ; extra == 'test'"", ""sqlite-utils ; extra == 'test'""]",>=3.7,0.2,0, datasette-multiline-links,Make multiple newline separated URLs clickable in Datasette,"[""Framework :: Datasette"", ""License :: OSI Approved :: Apache Software License""]","# datasette-multiline-links [![PyPI](https://img.shields.io/pypi/v/datasette-multiline-links.svg)](https://pypi.org/project/datasette-multiline-links/) [![Changelog](https://img.shields.io/github/v/release/simonw/datasette-multiline-links?include_prereleases&label=changelog)](https://github.com/simonw/datasette-multiline-links/releases) [![Tests](https://github.com/simonw/datasette-multiline-links/workflows/Test/badge.svg)](https://github.com/simonw/datasette-multiline-links/actions?query=workflow%3ATest) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/datasette-multiline-links/blob/main/LICENSE) Make multiple newline separated URLs clickable in Datasette ## Installation Install this plugin in the same environment as Datasette. datasette install datasette-multiline-links ## Usage Once installed, if a cell has contents like this: ``` https://example.com Not a link https://google.com ``` It will be rendered as: ```html <a href=""https://example.com"">https://example.com</a> Not a link <a href=""https://google.com"">https://google.com</a> ``` ## Development To set up this plugin locally, first checkout the code. 
Then create a new virtual environment: cd datasette-multiline-links python3 -m venv venv source venv/bin/activate Now install the dependencies and test dependencies: pip install -e '.[test]' To run the tests: pytest ",Simon Willison,,text/markdown,https://github.com/simonw/datasette-multiline-links,,"Apache License, Version 2.0",,,https://pypi.org/project/datasette-multiline-links/,,https://pypi.org/project/datasette-multiline-links/,"{""CI"": ""https://github.com/simonw/datasette-multiline-links/actions"", ""Changelog"": ""https://github.com/simonw/datasette-multiline-links/releases"", ""Homepage"": ""https://github.com/simonw/datasette-multiline-links"", ""Issues"": ""https://github.com/simonw/datasette-multiline-links/issues""}",https://pypi.org/project/datasette-multiline-links/0.1/,"[""datasette"", ""pytest ; extra == 'test'"", ""pytest-asyncio ; extra == 'test'""]",>=3.7,0.1,0, datasette-nteract-data-explorer,automatic visual data explorer for datasette,"[""Framework :: Datasette"", ""License :: OSI Approved :: Apache Software License""]","# datasette-nteract-data-explorer [![PyPI](https://img.shields.io/pypi/v/datasette-nteract-data-explorer.svg)](https://pypi.org/project/datasette-nteract-data-explorer/) [![Changelog](https://img.shields.io/github/v/release/hydrosquall/datasette-nteract-data-explorer?include_prereleases&label=changelog)](https://github.com/hydrosquall/datasette-nteract-data-explorer/releases) [![Tests](https://github.com/hydrosquall/datasette-nteract-data-explorer/workflows/Test/badge.svg)](https://github.com/hydrosquall/datasette-nteract-data-explorer/actions?query=workflow%3ATest) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/hydrosquall/datasette-nteract-data-explorer/blob/main/LICENSE) An automatic data visualization plugin for the [Datasette](https://datasette.io/) ecosystem. See your dataset from multiple views with an easy-to-use, customizable menu-based interface. ## Demo Try the [live demo](https://datasette-nteract-data-explorer.vercel.app/happy_planet_index/hpi_cleaned?_size=137) ![screenshot](https://p-qkfgo2.t2.n0.cdn.getcloudapp.com/items/yAuK9LRE/6802f849-315d-4a21-93b4-61c94d066bdc.jpg?v=f1ceee5ed70832d74e745b6508baeffb) _Running Datasette with the Happy Planet Index dataset_ ## Installation Install this plugin in the same Python environment as Datasette. ```bash datasette install datasette-nteract-data-explorer ``` ## Usage - Click ""View in Data Explorer"" to expand the visualization panel - Click the icons on the right side to change the visualization type. - Use the menus underneath the graphing area to configure your graph (e.g. change which columns to graph, colors to use, etc) - Use ""advanced settings"" mode to override the inferred column types. For example, you may want to treat a number as a ""string"" to be able to use it as a category. - See a [live demo](https://data-explorer.nteract.io/) of the original Nteract data-explorer component used in isolation. You can run a minimal demo after the installation step: ```bash datasette -i demo/happy_planet_index.db ``` If you're interested in improving the demo site, you can run a copy of the site with the extra metadata/plugins used in the [published demo](https://datasette-nteract-data-explorer.vercel.app). ```bash make run-demo ``` Thank you for reading this far! If you use the Data Explorer in your own site and would like others to find it, you can [mention it here](https://github.com/hydrosquall/datasette-nteract-data-explorer/discussions/10). 
## Development See [contributing docs](./docs/CONTRIBUTING.md). ## Acknowledgements - The [Data Explorer](https://github.com/nteract/data-explorer) was designed by Elijah Meeks. I co-maintain this project as part of the [Nteract](https://nteract.io/) open-source team. You can read about the design behind this tool [here](https://blog.nteract.io/designing-the-nteract-data-explorer-f4476d53f897) - The data model is based on the [Frictionless Data Spec](https://specs.frictionlessdata.io/). - This plugin was bootstrapped by Simon Willison's [Datasette plugin template](https://simonwillison.net/2020/Jun/20/cookiecutter-plugins/) - Demo dataset from the [Happy Planet Index](https://happyplanetindex.org/) was cleaned by Doris Lee. This dataset was chosen because of its global appeal, modest size, and variety in column datatypes (numbers, low cardinality and high cardinality strings, booleans). - Hosting for the demo site is provided by Vercel. [![site hosted by vercel](https://www.datocms-assets.com/31049/1618983297-powered-by-vercel.svg)](https://vercel.com/?utm_source=datasette-visualization-plugin-demos&utm_campaign=oss) ",Cameron Yick,,text/markdown,https://github.com/hydrosquall/datasette-nteract-data-explorer,,"Apache License, Version 2.0",,,https://pypi.org/project/datasette-nteract-data-explorer/,,https://pypi.org/project/datasette-nteract-data-explorer/,"{""CI"": ""https://github.com/hydrosquall/datasette-nteract-data-explorer/actions"", ""Changelog"": ""https://github.com/hydrosquall/datasette-nteract-data-explorer/releases"", ""Homepage"": ""https://github.com/hydrosquall/datasette-nteract-data-explorer"", ""Issues"": ""https://github.com/hydrosquall/datasette-nteract-data-explorer/issues""}",https://pypi.org/project/datasette-nteract-data-explorer/0.5.1/,"[""datasette"", ""pytest ; extra == 'test'"", ""pytest-asyncio ; extra == 'test'""]",>=3.7,0.5.1,0, datasette-packages,Show a list of currently installed Python packages,"[""Framework :: Datasette"", ""License :: OSI Approved :: Apache Software License""]","# datasette-packages [![PyPI](https://img.shields.io/pypi/v/datasette-packages.svg)](https://pypi.org/project/datasette-packages/) [![Changelog](https://img.shields.io/github/v/release/simonw/datasette-packages?include_prereleases&label=changelog)](https://github.com/simonw/datasette-packages/releases) [![Tests](https://github.com/simonw/datasette-packages/workflows/Test/badge.svg)](https://github.com/simonw/datasette-packages/actions?query=workflow%3ATest) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/datasette-packages/blob/main/LICENSE) Show a list of currently installed Python packages ## Installation Install this plugin in the same environment as Datasette. $ datasette install datasette-packages ## Usage Visit `/-/packages` to see a list of installed Python packages. Visit `/-/packages.json` to get that back as JSON. 
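For a rough sense of what that JSON contains, here is a minimal sketch (not the plugin's own code) that gathers the same name and version information using only the Python standard library:

```python
# Illustrative sketch: collect installed package names and versions,
# the same information the plugin serves at /-/packages.json
from importlib.metadata import distributions

packages = sorted(
    (dist.metadata[""Name""], dist.version) for dist in distributions()
)
for name, version in packages:
    print(name, version)
```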
## Demo The output of this plugin can be seen here: - https://latest-with-plugins.datasette.io/-/packages - https://latest-with-plugins.datasette.io/-/packages.json ## With datasette-graphql If you have version 2.1 or higher of the [datasette-graphql](https://datasette.io/plugins/datasette-graphql) plugin installed you can also query the list of packages using this GraphQL query: ```graphql { packages { name version } } ``` [Demo of this query](https://latest-with-plugins.datasette.io/graphql?query=%7B%0A%20%20%20%20packages%20%7B%0A%20%20%20%20%20%20%20%20name%0A%20%20%20%20%20%20%20%20version%0A%20%20%20%20%7D%0A%7D). ## Development To set up this plugin locally, first checkout the code. Then create a new virtual environment: cd datasette-packages python3 -mvenv venv source venv/bin/activate Now install the dependencies and test dependencies: pip install -e '.[test]' To run the tests: pytest ",Simon Willison,,text/markdown,https://github.com/simonw/datasette-packages,,"Apache License, Version 2.0",,,https://pypi.org/project/datasette-packages/,,https://pypi.org/project/datasette-packages/,"{""CI"": ""https://github.com/simonw/datasette-packages/actions"", ""Changelog"": ""https://github.com/simonw/datasette-packages/releases"", ""Homepage"": ""https://github.com/simonw/datasette-packages"", ""Issues"": ""https://github.com/simonw/datasette-packages/issues""}",https://pypi.org/project/datasette-packages/0.2/,"[""datasette"", ""pytest ; extra == 'test'"", ""pytest-asyncio ; extra == 'test'"", ""datasette-graphql (>=2.1) ; extra == 'test'""]",>=3.7,0.2,0, datasette-pretty-traces,Prettier formatting for ?_trace=1 traces,"[""Framework :: Datasette"", ""License :: OSI Approved :: Apache Software License""]","# datasette-pretty-traces [![PyPI](https://img.shields.io/pypi/v/datasette-pretty-traces.svg)](https://pypi.org/project/datasette-pretty-traces/) [![Changelog](https://img.shields.io/github/v/release/simonw/datasette-pretty-traces?include_prereleases&label=changelog)](https://github.com/simonw/datasette-pretty-traces/releases) [![Tests](https://github.com/simonw/datasette-pretty-traces/workflows/Test/badge.svg)](https://github.com/simonw/datasette-pretty-traces/actions?query=workflow%3ATest) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/datasette-pretty-traces/blob/main/LICENSE) Prettier formatting for `?_trace=1` traces ## Installation Install this plugin in the same environment as Datasette. $ datasette install datasette-pretty-traces ## Usage Once installed, run Datasette using `--setting trace_debug 1`: datasette fixtures.db --setting trace_debug 1 Then navigate to any page and add `?_trace=1` to the URL: http://localhost:8001/?_trace=1 The plugin will scroll you down the page to the visualized trace information. ## Demo You can try out the demo here: - [/?_trace=1](https://latest-with-plugins.datasette.io/?_trace=1) tracing the homepage - [/github/commits?_trace=1](https://latest-with-plugins.datasette.io/github/commits?_trace=1) tracing a table page ## Screenshot ![Screenshot showing the visualization produced by the plugin](https://user-images.githubusercontent.com/9599/145883732-a53accdd-5feb-4629-94cd-f73407c7943d.png) ## Development To set up this plugin locally, first checkout the code. 
Then create a new virtual environment: cd datasette-pretty-traces python3 -mvenv venv source venv/bin/activate Or if you are using `pipenv`: pipenv shell Now install the dependencies and test dependencies: pip install -e '.[test]' To run the tests: pytest ",Simon Willison,,text/markdown,https://github.com/simonw/datasette-pretty-traces,,"Apache License, Version 2.0",,,https://pypi.org/project/datasette-pretty-traces/,,https://pypi.org/project/datasette-pretty-traces/,"{""CI"": ""https://github.com/simonw/datasette-pretty-traces/actions"", ""Changelog"": ""https://github.com/simonw/datasette-pretty-traces/releases"", ""Homepage"": ""https://github.com/simonw/datasette-pretty-traces"", ""Issues"": ""https://github.com/simonw/datasette-pretty-traces/issues""}",https://pypi.org/project/datasette-pretty-traces/0.4/,"[""datasette"", ""pytest ; extra == 'test'"", ""pytest-asyncio ; extra == 'test'""]",>=3.6,0.4,0, datasette-public,Make specific Datasette tables visible to the public,"[""Framework :: Datasette"", ""License :: OSI Approved :: Apache Software License""]","# datasette-public [![PyPI](https://img.shields.io/pypi/v/datasette-public.svg)](https://pypi.org/project/datasette-public/) [![Changelog](https://img.shields.io/github/v/release/simonw/datasette-public?include_prereleases&label=changelog)](https://github.com/simonw/datasette-public/releases) [![Tests](https://github.com/simonw/datasette-public/workflows/Test/badge.svg)](https://github.com/simonw/datasette-public/actions?query=workflow%3ATest) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/datasette-public/blob/main/LICENSE) Make specific Datasette tables visible to the public ## Installation Install this plugin in the same environment as Datasette. datasette install datasette-public ## Usage Any tables listed in the `_public_tables` table will be visible to the public, even if the rest of the Datasette instance does not allow anonymous access. The root user (and any user with the new `public-tables` permission) will get a new option in the table action menu allowing them to toggle a table between public and private. Installing this plugin also causes `allow-sql` permission checks to fall back to checking if the user has access to the entire database. This is to avoid users with access to a single public table being able to access data from other tables using the `?_where=` query string parameter. ## Configuration This plugin creates a new table in one of your databases called `_public_tables`. This table defaults to being created in the first database passed to Datasette. To create it in a different named database, use this plugin configuration: ```json { ""plugins"": { ""datasette-public"": { ""database"": ""database_to_create_table_in"" } } } ``` ## Development To set up this plugin locally, first checkout the code. 
Then create a new virtual environment: cd datasette-public python3 -m venv venv source venv/bin/activate Now install the dependencies and test dependencies: pip install -e '.[test]' To run the tests: pytest ",Simon Willison,,text/markdown,https://github.com/simonw/datasette-public,,"Apache License, Version 2.0",,,https://pypi.org/project/datasette-public/,,https://pypi.org/project/datasette-public/,"{""CI"": ""https://github.com/simonw/datasette-public/actions"", ""Changelog"": ""https://github.com/simonw/datasette-public/releases"", ""Homepage"": ""https://github.com/simonw/datasette-public"", ""Issues"": ""https://github.com/simonw/datasette-public/issues""}",https://pypi.org/project/datasette-public/0.2/,"[""datasette"", ""pytest ; extra == 'test'"", ""pytest-asyncio ; extra == 'test'""]",>=3.7,0.2,0, datasette-query-files,Write Datasette canned queries as plain SQL files,"[""Framework :: Datasette"", ""License :: OSI Approved :: Apache Software License""]","# datasette-query-files [![PyPI](https://img.shields.io/pypi/v/datasette-query-files.svg)](https://pypi.org/project/datasette-query-files/) [![Changelog](https://img.shields.io/github/v/release/eyeseast/datasette-query-files?include_prereleases&label=changelog)](https://github.com/eyeseast/datasette-query-files/releases) [![Tests](https://github.com/eyeseast/datasette-query-files/workflows/Test/badge.svg)](https://github.com/eyeseast/datasette-query-files/actions?query=workflow%3ATest) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/eyeseast/datasette-query-files/blob/main/LICENSE) Write Datasette canned queries as plain SQL files. ## Installation Install this plugin in the same environment as Datasette. datasette install datasette-query-files Or using `pip` or `pipenv`: pip install datasette-query-files pipenv install datasette-query-files ## Usage This plugin will look for [canned queries](https://docs.datasette.io/en/stable/sql_queries.html#canned-queries) in the filesystem, in addition to any defined in metadata. Let's say you're working in a directory called `project-directory`, with a database file called `my-project.db`. Start by creating a `queries` directory with a `my-project` directory inside it. Any SQL file inside that `my-project` folder will become a canned query that can be run on the `my-project` database. If you have a `query-name.sql` file and a `query-name.json` (or `query-name.yml`) file in the same directory, the JSON file will be used as query metadata. ``` project-directory/ my-project.db queries/ my-project/ query-name.sql # a query query-name.yml # query metadata ``` ## Development To set up this plugin locally, first checkout the code. 
Then create a new virtual environment: cd datasette-query-files python3 -m venv venv source venv/bin/activate Now install the dependencies and test dependencies: pip install -e '.[test]' To run the tests: pytest ",Chris Amico,,text/markdown,https://github.com/eyeseast/datasette-query-files,,"Apache License, Version 2.0",,,https://pypi.org/project/datasette-query-files/,,https://pypi.org/project/datasette-query-files/,"{""CI"": ""https://github.com/eyeseast/datasette-query-files/actions"", ""Changelog"": ""https://github.com/eyeseast/datasette-query-files/releases"", ""Homepage"": ""https://github.com/eyeseast/datasette-query-files"", ""Issues"": ""https://github.com/eyeseast/datasette-query-files/issues""}",https://pypi.org/project/datasette-query-files/0.1.1/,"[""datasette"", ""pytest ; extra == 'test'"", ""pytest-asyncio ; extra == 'test'""]",>=3.7,0.1.1,0, datasette-redirect-forbidden,Redirect forbidden requests to a login page,"[""Framework :: Datasette"", ""License :: OSI Approved :: Apache Software License""]","# datasette-redirect-forbidden [![PyPI](https://img.shields.io/pypi/v/datasette-redirect-forbidden.svg)](https://pypi.org/project/datasette-redirect-forbidden/) [![Changelog](https://img.shields.io/github/v/release/simonw/datasette-redirect-forbidden?include_prereleases&label=changelog)](https://github.com/simonw/datasette-redirect-forbidden/releases) [![Tests](https://github.com/simonw/datasette-redirect-forbidden/workflows/Test/badge.svg)](https://github.com/simonw/datasette-redirect-forbidden/actions?query=workflow%3ATest) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/datasette-redirect-forbidden/blob/main/LICENSE) Redirect forbidden requests to a login page ## Installation Install this plugin in the same environment as Datasette. $ datasette install datasette-redirect-forbidden ## Usage Add the following to your `metadata.yml` (or `metadata.json`) file to configure the plugin: ```yaml plugins: datasette-redirect-forbidden: redirect_to: /-/login ``` Any 403 forbidden pages will redirect to the specified page. ## Development To set up this plugin locally, first checkout the code. 
Then create a new virtual environment: cd datasette-redirect-forbidden python3 -mvenv venv source venv/bin/activate Or if you are using `pipenv`: pipenv shell Now install the dependencies and test dependencies: pip install -e '.[test]' To run the tests: pytest ",Simon Willison,,text/markdown,https://github.com/simonw/datasette-redirect-forbidden,,"Apache License, Version 2.0",,,https://pypi.org/project/datasette-redirect-forbidden/,,https://pypi.org/project/datasette-redirect-forbidden/,"{""CI"": ""https://github.com/simonw/datasette-redirect-forbidden/actions"", ""Changelog"": ""https://github.com/simonw/datasette-redirect-forbidden/releases"", ""Homepage"": ""https://github.com/simonw/datasette-redirect-forbidden"", ""Issues"": ""https://github.com/simonw/datasette-redirect-forbidden/issues""}",https://pypi.org/project/datasette-redirect-forbidden/0.1/,"[""datasette"", ""pytest ; extra == 'test'"", ""pytest-asyncio ; extra == 'test'""]",>=3.6,0.1,0, datasette-render-image-tags,Turn any URLs ending in .jpg/.png/.gif into img tags with width 200,"[""Framework :: Datasette"", ""License :: OSI Approved :: Apache Software License""]","# datasette-render-image-tags [![PyPI](https://img.shields.io/pypi/v/datasette-render-image-tags.svg)](https://pypi.org/project/datasette-render-image-tags/) [![Changelog](https://img.shields.io/github/v/release/simonw/datasette-render-image-tags?include_prereleases&label=changelog)](https://github.com/simonw/datasette-render-image-tags/releases) [![Tests](https://github.com/simonw/datasette-render-image-tags/workflows/Test/badge.svg)](https://github.com/simonw/datasette-render-image-tags/actions?query=workflow%3ATest) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/datasette-render-image-tags/blob/main/LICENSE) Turn any URLs ending in .jpg/.png/.gif into img tags with width 200 ## Installation Install this plugin in the same environment as Datasette. datasette install datasette-render-image-tags ## Usage Once installed, any cells containing a URL that ends with `.png` or `.jpg` or `.jpeg` or `.gif` will be rendered using an image tag, with a width of 200px. ## Development To set up this plugin locally, first checkout the code. 
Then create a new virtual environment: cd datasette-render-image-tags python3 -m venv venv source venv/bin/activate Now install the dependencies and test dependencies: pip install -e '.[test]' To run the tests: pytest ",Simon Willison,,text/markdown,https://github.com/simonw/datasette-render-image-tags,,"Apache License, Version 2.0",,,https://pypi.org/project/datasette-render-image-tags/,,https://pypi.org/project/datasette-render-image-tags/,"{""CI"": ""https://github.com/simonw/datasette-render-image-tags/actions"", ""Changelog"": ""https://github.com/simonw/datasette-render-image-tags/releases"", ""Homepage"": ""https://github.com/simonw/datasette-render-image-tags"", ""Issues"": ""https://github.com/simonw/datasette-render-image-tags/issues""}",https://pypi.org/project/datasette-render-image-tags/0.1/,"[""datasette"", ""pytest ; extra == 'test'"", ""pytest-asyncio ; extra == 'test'""]",>=3.7,0.1,0, datasette-sandstorm-support,Authentication and permissions for Datasette on Sandstorm,"[""Framework :: Datasette"", ""License :: OSI Approved :: Apache Software License""]","# datasette-sandstorm-support [![PyPI](https://img.shields.io/pypi/v/datasette-sandstorm-support.svg)](https://pypi.org/project/datasette-sandstorm-support/) [![Changelog](https://img.shields.io/github/v/release/simonw/datasette-sandstorm-support?include_prereleases&label=changelog)](https://github.com/simonw/datasette-sandstorm-support/releases) [![Tests](https://github.com/simonw/datasette-sandstorm-support/workflows/Test/badge.svg)](https://github.com/simonw/datasette-sandstorm-support/actions?query=workflow%3ATest) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/datasette-sandstorm-support/blob/main/LICENSE) Authentication and permissions for Datasette on Sandstorm ## Installation Install this plugin in the same environment as Datasette. datasette install datasette-sandstorm-support ## Usage This plugin is part of [datasette-sandstorm](https://github.com/ocdtrekkie/datasette-sandstorm). ## Development To set up this plugin locally, first checkout the code. 
Then create a new virtual environment: cd datasette-sandstorm-support python3 -m venv venv source venv/bin/activate Now install the dependencies and test dependencies: pip install -e '.[test]' To run the tests: pytest ",Simon Willison,,text/markdown,https://github.com/simonw/datasette-sandstorm-support,,"Apache License, Version 2.0",,,https://pypi.org/project/datasette-sandstorm-support/,,https://pypi.org/project/datasette-sandstorm-support/,"{""CI"": ""https://github.com/simonw/datasette-sandstorm-support/actions"", ""Changelog"": ""https://github.com/simonw/datasette-sandstorm-support/releases"", ""Homepage"": ""https://github.com/simonw/datasette-sandstorm-support"", ""Issues"": ""https://github.com/simonw/datasette-sandstorm-support/issues""}",https://pypi.org/project/datasette-sandstorm-support/0.2/,"[""datasette"", ""pytest ; extra == 'test'"", ""pytest-asyncio ; extra == 'test'""]",>=3.7,0.2,0, datasette-scale-to-zero,Quit Datasette if it has not received traffic for a specified time period,"[""Framework :: Datasette"", ""License :: OSI Approved :: Apache Software License""]","# datasette-scale-to-zero [![PyPI](https://img.shields.io/pypi/v/datasette-scale-to-zero.svg)](https://pypi.org/project/datasette-scale-to-zero/) [![Changelog](https://img.shields.io/github/v/release/simonw/datasette-scale-to-zero?include_prereleases&label=changelog)](https://github.com/simonw/datasette-scale-to-zero/releases) [![Tests](https://github.com/simonw/datasette-scale-to-zero/workflows/Test/badge.svg)](https://github.com/simonw/datasette-scale-to-zero/actions?query=workflow%3ATest) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/datasette-scale-to-zero/blob/main/LICENSE) Quit Datasette if it has not received traffic for a specified time period Some hosting providers such as [Fly](https://fly.io/) offer a scale to zero mechanism, where servers can shut down and will be automatically started when new traffic arrives. This plugin can be used to configure Datasette to quit X minutes (or seconds, or hours) after the last request it received. It can also cause the Datasette server to exit after a configured maximum time whether or not it is receiving traffic. ## Installation Install this plugin in the same environment as Datasette. datasette install datasette-scale-to-zero ## Configuration This plugin will only take effect if it has been configured. Add the following to your ``metadata.json`` or ``metadata.yml`` configuration file: ```json { ""plugins"": { ""datasette-scale-to-zero"": { ""duration"": ""10m"" } } } ``` This will cause Datasette to quit if it has not received any HTTP traffic for 10 minutes. You can set this value using a suffix of `m` for minutes, `h` for hours or `s` for seconds. To cause Datasette to exit if the server has been running for longer than a specific time, use `""max-age""`: ```json { ""plugins"": { ""datasette-scale-to-zero"": { ""max-age"": ""10h"" } } } ``` This example will exit the Datasette server if it has been running for more than ten hours. You can use `""duration""` and `""max-age""` together in the same configuration file: ```json { ""plugins"": { ""datasette-scale-to-zero"": { ""max-age"": ""10h"", ""duration"": ""5m"" } } } ``` This example will quit if no traffic has been received in five minutes, or if the server has been running for ten hours. ## Development To set up this plugin locally, first checkout the code. 
Then create a new virtual environment: cd datasette-scale-to-zero python3 -m venv venv source venv/bin/activate Now install the dependencies and test dependencies: pip install -e '.[test]' To run the tests: pytest ",Simon Willison,,text/markdown,https://github.com/simonw/datasette-scale-to-zero,,"Apache License, Version 2.0",,,https://pypi.org/project/datasette-scale-to-zero/,,https://pypi.org/project/datasette-scale-to-zero/,"{""CI"": ""https://github.com/simonw/datasette-scale-to-zero/actions"", ""Changelog"": ""https://github.com/simonw/datasette-scale-to-zero/releases"", ""Homepage"": ""https://github.com/simonw/datasette-scale-to-zero"", ""Issues"": ""https://github.com/simonw/datasette-scale-to-zero/issues""}",https://pypi.org/project/datasette-scale-to-zero/0.2/,"[""datasette"", ""pytest ; extra == 'test'"", ""pytest-asyncio ; extra == 'test'""]",>=3.7,0.2,0, datasette-sentry,Datasette plugin for configuring Sentry,"[""License :: OSI Approved :: Apache Software License"", ""Programming Language :: Python :: 3.7"", ""Programming Language :: Python :: 3.8""]","# datasette-sentry [![PyPI](https://img.shields.io/pypi/v/datasette-sentry.svg)](https://pypi.org/project/datasette-sentry/) [![Changelog](https://img.shields.io/github/v/release/simonw/datasette-sentry?include_prereleases&label=changelog)](https://github.com/simonw/datasette-sentry/releases) [![Tests](https://github.com/simonw/datasette-sentry/workflows/Test/badge.svg)](https://github.com/simonw/datasette-sentry/actions?query=workflow%3ATest) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/datasette-sentry/blob/main/LICENSE) Datasette plugin for configuring Sentry for error reporting ## Installation pip install datasette-sentry ## Usage This plugin only takes effect if your `metadata.json` file contains relevant top-level plugin configuration in a `""datasette-sentry""` configuration key. You will need a Sentry DSN - see their [Getting Started instructions](https://docs.sentry.io/error-reporting/quickstart/?platform=python). Add it to `metadata.json` like this: ```json { ""plugins"": { ""datasette-sentry"": { ""dsn"": ""https://KEY@sentry.io/PROJECTID"" } } } ``` Settings in `metadata.json` are visible to anyone who visits the `/-/metadata` URL so this is a good place to take advantage of Datasette's [secret configuration values](https://datasette.readthedocs.io/en/stable/plugins.html#secret-configuration-values), in which case your configuration will look more like this: ```json { ""plugins"": { ""datasette-sentry"": { ""dsn"": { ""$env"": ""SENTRY_DSN"" } } } } ``` Then make a `SENTRY_DSN` environment variable available to Datasette. ## Configuration In addition to the `dsn` setting, you can also configure the Sentry [sample rate](https://docs.sentry.io/platforms/python/configuration/sampling/) by setting `sample_rate` to a floating point number between 0 and 1. 
For example, to capture 25% of errors you would do this: ```json { ""plugins"": { ""datasette-sentry"": { ""dsn"": { ""$env"": ""SENTRY_DSN"" }, ""sample_rate"": 0.25 } } } ``` ",Simon Willison,,text/markdown,https://github.com/simonw/datasette-sentry,,"Apache License, Version 2.0",,,https://pypi.org/project/datasette-sentry/,,https://pypi.org/project/datasette-sentry/,"{""Homepage"": ""https://github.com/simonw/datasette-sentry""}",https://pypi.org/project/datasette-sentry/0.3/,"[""sentry-sdk"", ""datasette (>=0.62)"", ""pytest ; extra == 'test'"", ""pytest-asyncio ; extra == 'test'""]",,0.3,0, datasette-sitemap,Generate sitemap.xml for Datasette sites,"[""Framework :: Datasette"", ""License :: OSI Approved :: Apache Software License""]","# datasette-sitemap [![PyPI](https://img.shields.io/pypi/v/datasette-sitemap.svg)](https://pypi.org/project/datasette-sitemap/) [![Changelog](https://img.shields.io/github/v/release/simonw/datasette-sitemap?include_prereleases&label=changelog)](https://github.com/simonw/datasette-sitemap/releases) [![Tests](https://github.com/simonw/datasette-sitemap/workflows/Test/badge.svg)](https://github.com/simonw/datasette-sitemap/actions?query=workflow%3ATest) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/datasette-sitemap/blob/main/LICENSE) Generate sitemap.xml for Datasette sites ## Installation Install this plugin in the same environment as Datasette. datasette install datasette-sitemap ## Demo This plugin is used for the sitemap on [til.simonwillison.net](https://til.simonwillison.net/): - https://til.simonwillison.net/sitemap.xml Here's [the configuration](https://github.com/simonw/til/blob/d4f67743a90a67100b46145986b2dec6f8d96583/metadata.yaml#L14-L16) used for that sitemap. ## Usage Once configured, this plugin adds a sitemap at `/sitemap.xml` with a list of URLs. This list is defined using a SQL query in `metadata.json` (or `.yml`) that looks like this: ```json { ""plugins"": { ""datasette-sitemap"": { ""query"": ""select '/' || id as path from my_table"" } } } ``` Using `metadata.yml` allows for multi-line SQL queries which can be easier to maintain: ```yaml plugins: datasette-sitemap: query: | select '/' || id as path from my_table ``` The SQL query must return a column called `path`. The values in this column must begin with a `/`. They will be used to generate a sitemap that looks like this: ```xml <?xml version=""1.0"" encoding=""UTF-8""?> <urlset xmlns=""http://www.sitemaps.org/schemas/sitemap/0.9""> <url><loc>https://example.com/1</loc></url> <url><loc>https://example.com/2</loc></url> </urlset> ``` You can use ``UNION`` in your SQL query to combine results from multiple tables, or include literal paths that you want to include in the index: ```sql select '/data/table1/' || id as path from table1 union select '/data/table2/' || id as path from table2 union select '/about' as path ``` If your Datasette instance has multiple databases you can configure the database to query using the `database` configuration property. By default the domain name for the generated URLs in the sitemap will be detected from the incoming request. You can set `base_url` instead to override this. This should not include a trailing slash. 
This example shows both of those settings, running the query against the `content` database and setting a custom base URL: ```yaml plugins: datasette-sitemap: query: | select '/plugins/' || name as path from plugins union select '/tools/' || name as path from tools union select '/news' as path database: content base_url: https://datasette.io ``` [Try that query](https://datasette.io/content?sql=select+%27%2Fplugins%2F%27+||+name+as+path+from+plugins%0D%0Aunion%0D%0Aselect+%27%2Ftools%2F%27+||+name+as+path+from+tools%0D%0Aunion%0D%0Aselect+%27%2Fnews%27+as+path%0D%0A). ## robots.txt This plugin adds a `robots.txt` file pointing to the sitemap: ``` Sitemap: http://example.com/sitemap.xml ``` You can take full control of the sitemap by installing and configuring the [datasette-block-robots](https://datasette.io/plugins/datasette-block-robots) plugin. This plugin will add the `Sitemap:` line even if you are using `datasette-block-robots` for the rest of your `robots.txt` file. ## Adding paths to the sitemap from other plugins This plugin adds a new [plugin hook](https://docs.datasette.io/en/stable/plugin_hooks.html) to Datasette called `sitemap_extra_paths()` which can be used by other plugins to add their own additional lines to the `sitemap.xml` file. The hook accepts these optional parameters: - `datasette`: The current [Datasette instance](https://docs.datasette.io/en/stable/internals.html#datasette-class). You can use this to execute SQL queries or read plugin configuration settings. - `request`: The [Request object](https://docs.datasette.io/en/stable/internals.html#request-object) representing the incoming request to `/sitemap.xml`. The hook should return a list of strings, each representing a path to be added to the sitemap. Each path must begin with a `/`. It can also return an `async def` function, which will be awaited and used to generate a list of lines. Use this option if you need to make `await` calls inside your hook implementation. This example uses the hook to add two extra paths, one of which came from a SQL query: ```python from datasette import hookimpl @hookimpl def sitemap_extra_paths(datasette): async def inner(): db = datasette.get_database() path_from_db = (await db.execute(""select '/example'"")).single_value() return [""/about"", path_from_db] return inner ``` ## Development To set up this plugin locally, first checkout the code. 
Then create a new virtual environment: cd datasette-sitemap python3 -m venv venv source venv/bin/activate Now install the dependencies and test dependencies: pip install -e '.[test]' To run the tests: pytest ",Simon Willison,,text/markdown,https://github.com/simonw/datasette-sitemap,,"Apache License, Version 2.0",,,https://pypi.org/project/datasette-sitemap/,,https://pypi.org/project/datasette-sitemap/,"{""CI"": ""https://github.com/simonw/datasette-sitemap/actions"", ""Changelog"": ""https://github.com/simonw/datasette-sitemap/releases"", ""Homepage"": ""https://github.com/simonw/datasette-sitemap"", ""Issues"": ""https://github.com/simonw/datasette-sitemap/issues""}",https://pypi.org/project/datasette-sitemap/1.0/,"[""datasette"", ""pytest ; extra == 'test'"", ""pytest-asyncio ; extra == 'test'"", ""datasette-block-robots ; extra == 'test'""]",>=3.7,1.0,0, datasette-socrata,Import data from Socrata into Datasette,"[""Framework :: Datasette"", ""License :: OSI Approved :: Apache Software License""]","# datasette-socrata [![PyPI](https://img.shields.io/pypi/v/datasette-socrata.svg)](https://pypi.org/project/datasette-socrata/) [![Changelog](https://img.shields.io/github/v/release/simonw/datasette-socrata?include_prereleases&label=changelog)](https://github.com/simonw/datasette-socrata/releases) [![Tests](https://github.com/simonw/datasette-socrata/workflows/Test/badge.svg)](https://github.com/simonw/datasette-socrata/actions?query=workflow%3ATest) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/datasette-socrata/blob/main/LICENSE) Import data from Socrata into Datasette ## Installation Install this plugin in the same environment as Datasette. datasette install datasette-socrata ## Usage Make sure you have [enabled WAL mode](https://til.simonwillison.net/sqlite/enabling-wal-mode) on your database files before using this plugin. Once installed, an interface for importing data from Socrata will become available at this URL: /-/import-socrata Users will be able to paste in a URL to a dataset on Socrata in order to initialize an import. You can also pre-fill the form by passing a `?url=` parameter, for example: /-/import-socrata?url=https://data.sfgov.org/City-Infrastructure/Street-Tree-List/tkzw-k3nq Any database that is attached to Datasette, is NOT loaded as immutable (with the `-i` option), and has WAL mode enabled will be available for users to import data into. The `import-socrata` permission governs access. By default the `root` actor (accessible using `datasette --root` to start Datasette) is granted that permission. You can use permission plugins such as [datasette-permissions-sql](https://github.com/simonw/datasette-permissions-sql) to grant additional access to other users. ## Configuration If you only want Socrata imports to be allowed to a specific database, you can configure that using plugin configuration in `metadata.yml`: ```yaml plugins: datasette-socrata: database: socrata ``` ## Development To set up this plugin locally, first checkout the code. 
Then create a new virtual environment: cd datasette-socrata python3 -m venv venv source venv/bin/activate Now install the dependencies and test dependencies: pip install -e '.[test]' To run the tests: pytest ",Simon Willison,,text/markdown,https://github.com/simonw/datasette-socrata,,"Apache License, Version 2.0",,,https://pypi.org/project/datasette-socrata/,,https://pypi.org/project/datasette-socrata/,"{""CI"": ""https://github.com/simonw/datasette-socrata/actions"", ""Changelog"": ""https://github.com/simonw/datasette-socrata/releases"", ""Homepage"": ""https://github.com/simonw/datasette-socrata"", ""Issues"": ""https://github.com/simonw/datasette-socrata/issues""}",https://pypi.org/project/datasette-socrata/0.3/,"[""datasette"", ""sqlite-utils (>=3.27)"", ""datasette-low-disk-space-hook"", ""pytest ; extra == 'test'"", ""pytest-asyncio ; extra == 'test'"", ""pytest-httpx ; extra == 'test'""]",>=3.7,0.3,0, datasette-sqlite-fts4,Datasette plugin exposing SQL functions from sqlite-fts4,"[""Framework :: Datasette"", ""License :: OSI Approved :: Apache Software License""]","# datasette-sqlite-fts4 [![PyPI](https://img.shields.io/pypi/v/datasette-sqlite-fts4.svg)](https://pypi.org/project/datasette-sqlite-fts4/) [![Changelog](https://img.shields.io/github/v/release/simonw/datasette-sqlite-fts4?include_prereleases&label=changelog)](https://github.com/simonw/datasette-sqlite-fts4/releases) [![Tests](https://github.com/simonw/datasette-sqlite-fts4/workflows/Test/badge.svg)](https://github.com/simonw/datasette-sqlite-fts4/actions?query=workflow%3ATest) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/datasette-sqlite-fts4/blob/main/LICENSE) Datasette plugin that exposes the custom SQL functions from [sqlite-fts4](https://github.com/simonw/sqlite-fts4). [Interactive demo](https://datasette-sqlite-fts4.datasette.io/24ways-fts4?sql=select%0D%0A++++json_object%28%0D%0A++++++++""label""%2C+articles.title%2C+""href""%2C+articles.url%0D%0A++++%29+as+article%2C%0D%0A++++articles.author%2C%0D%0A++++rank_score%28matchinfo%28articles_fts%2C+""pcx""%29%29+as+score%2C%0D%0A++++rank_bm25%28matchinfo%28articles_fts%2C+""pcnalx""%29%29+as+bm25%2C%0D%0A++++json_object%28%0D%0A++++++++""pre""%2C+annotate_matchinfo%28matchinfo%28articles_fts%2C+""pcxnalyb""%29%2C+""pcxnalyb""%29%0D%0A++++%29+as+annotated_matchinfo%2C%0D%0A++++matchinfo%28articles_fts%2C+""pcxnalyb""%29+as+matchinfo%2C%0D%0A++++decode_matchinfo%28matchinfo%28articles_fts%2C+""pcxnalyb""%29%29+as+decoded_matchinfo%0D%0Afrom%0D%0A++++articles_fts+join+articles+on+articles.rowid+%3D+articles_fts.rowid%0D%0Awhere%0D%0A++++articles_fts+match+%3Asearch%0D%0Aorder+by+bm25&search=jquery+maps). Read [Exploring search relevance algorithms with SQLite](https://simonwillison.net/2019/Jan/7/exploring-search-relevance-algorithms-sqlite/) for further details on this project. 
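These are the same functions provided by the underlying [sqlite-fts4](https://github.com/simonw/sqlite-fts4) Python package, so you can experiment with them outside Datasette too. Here is a minimal sketch, assuming that package's `register_functions()` API and a database with an `articles_fts` FTS4 table like the demo's:

```python
# Sketch: register the sqlite-fts4 ranking functions on a plain
# sqlite3 connection, then rank matches with rank_bm25().
# The database filename and table are assumptions based on the demo.
import sqlite3
from sqlite_fts4 import register_functions

conn = sqlite3.connect(""24ways.db"")
register_functions(conn)
rows = conn.execute(
    ""select rowid, rank_bm25(matchinfo(articles_fts, 'pcnalx')) as score ""
    ""from articles_fts where articles_fts match ? order by score"",
    [""jquery""],
).fetchall()
```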
## Installation pip install datasette-sqlite-fts4 If you are deploying a database using `datasette publish` you can include this plugin using the `--install` option: datasette publish now mydb.db --install=datasette-sqlite-fts4 ",Simon Willison,,text/markdown,https://github.com/simonw/datasette-sqlite-fts4,,"Apache License, Version 2.0",,,https://pypi.org/project/datasette-sqlite-fts4/,,https://pypi.org/project/datasette-sqlite-fts4/,"{""CI"": ""https://github.com/simonw/datasette-sqlite-fts4/actions"", ""Changelog"": ""https://github.com/simonw/datasette-sqlite-fts4/releases"", ""Homepage"": ""https://github.com/simonw/datasette-sqlite-fts4"", ""Issues"": ""https://github.com/simonw/datasette-sqlite-fts4/issues""}",https://pypi.org/project/datasette-sqlite-fts4/0.3.2/,"[""datasette"", ""sqlite-fts4 (>=1.0.3)"", ""pytest ; extra == 'test'"", ""pytest-asyncio ; extra == 'test'""]",>=3.7,0.3.2,0, datasette-tiddlywiki,Run TiddlyWiki in Datasette and save Tiddlers to a SQLite database,"[""Framework :: Datasette"", ""License :: OSI Approved :: Apache Software License""]","# datasette-tiddlywiki [![PyPI](https://img.shields.io/pypi/v/datasette-tiddlywiki.svg)](https://pypi.org/project/datasette-tiddlywiki/) [![Changelog](https://img.shields.io/github/v/release/simonw/datasette-tiddlywiki?include_prereleases&label=changelog)](https://github.com/simonw/datasette-tiddlywiki/releases) [![Tests](https://github.com/simonw/datasette-tiddlywiki/workflows/Test/badge.svg)](https://github.com/simonw/datasette-tiddlywiki/actions?query=workflow%3ATest) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/datasette-tiddlywiki/blob/main/LICENSE) Run [TiddlyWiki](https://tiddlywiki.com/) in Datasette and save Tiddlers to a SQLite database ## Installation Install this plugin in the same environment as Datasette. $ datasette install datasette-tiddlywiki ## Usage Start Datasette with a `tiddlywiki.db` database. You can create it if it does not yet exist using `--create`. You need to be signed in as the `root` user to write to the wiki, so use the `--root` option and click on the link it provides: % datasette tiddlywiki.db --create --root http://127.0.0.1:8001/-/auth-token?token=456670f1e8d01a8a33b71e17653130de17387336e29afcdfb4ab3d18261e6630 # ... Navigate to `/-/tiddlywiki` on your instance to interact with TiddlyWiki. ## Authentication and permissions By default, the wiki can be read by anyone who has permission to read the `tiddlywiki.db` database. Only the signed in `root` user can write to it. You can sign in using the `--root` option described above, or you can set a password for that user using the [datasette-auth-passwords](https://datasette.io/plugins/datasette-auth-passwords) plugin and sign in using the `/-/login` page. You can use the `edit-tiddlywiki` permission to grant edit permissions to other users, using another plugin such as [datasette-permissions-sql](https://datasette.io/plugins/datasette-permissions-sql). You can use the `view-database` permission against the `tiddlywiki` database to control who can view the wiki. Datasette's permissions mechanism is described in full in [the Datasette documentation](https://docs.datasette.io/en/stable/authentication.html). ## Development To set up this plugin locally, first checkout the code. 
Then create a new virtual environment: cd datasette-tiddlywiki python3 -mvenv venv source venv/bin/activate Or if you are using `pipenv`: pipenv shell Now install the dependencies and test dependencies: pip install -e '.[test]' To run the tests: pytest ",Simon Willison,,text/markdown,https://github.com/simonw/datasette-tiddlywiki,,"Apache License, Version 2.0",,,https://pypi.org/project/datasette-tiddlywiki/,,https://pypi.org/project/datasette-tiddlywiki/,"{""CI"": ""https://github.com/simonw/datasette-tiddlywiki/actions"", ""Changelog"": ""https://github.com/simonw/datasette-tiddlywiki/releases"", ""Homepage"": ""https://github.com/simonw/datasette-tiddlywiki"", ""Issues"": ""https://github.com/simonw/datasette-tiddlywiki/issues""}",https://pypi.org/project/datasette-tiddlywiki/0.2/,"[""datasette"", ""pytest ; extra == 'test'"", ""pytest-asyncio ; extra == 'test'""]",>=3.6,0.2,0, datasette-total-page-time,Add a note to the Datasette footer measuring the total page load time,"[""Framework :: Datasette"", ""License :: OSI Approved :: Apache Software License""]","# datasette-total-page-time [![PyPI](https://img.shields.io/pypi/v/datasette-total-page-time.svg)](https://pypi.org/project/datasette-total-page-time/) [![Changelog](https://img.shields.io/github/v/release/simonw/datasette-total-page-time?include_prereleases&label=changelog)](https://github.com/simonw/datasette-total-page-time/releases) [![Tests](https://github.com/simonw/datasette-total-page-time/workflows/Test/badge.svg)](https://github.com/simonw/datasette-total-page-time/actions?query=workflow%3ATest) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/datasette-total-page-time/blob/main/LICENSE) Add a note to the Datasette footer measuring the total page load time ## Installation Install this plugin in the same environment as Datasette. datasette install datasette-total-page-time ## Usage Once this plugin is installed, a note will appear in the footer of every page showing how long the page took to generate. > Queries took 326.74ms · Page took 386.310ms ## How it works Measuring how long a page takes to load and then injecting that note into the page is tricky, because you need to finish generating the page before you know how long it took to load it! This plugin uses the [asgi_wrapper](https://docs.datasette.io/en/stable/plugin_hooks.html#asgi-wrapper-datasette) plugin hook to measure the time taken by Datasette and then inject the following JavaScript at the bottom of the response, after the closing `</html>` tag but with the correct measured value: ```html ``` This script is injected only on pages with the `text/html` content type - so it should not affect JSON or CSV returned by Datasette. ## Development To set up this plugin locally, first checkout the code. 
Then create a new virtual environment: cd datasette-total-page-time python3 -mvenv venv source venv/bin/activate Now install the dependencies and test dependencies: pip install -e '.[test]' To run the tests: pytest ",Simon Willison,,text/markdown,https://github.com/simonw/datasette-total-page-time,,"Apache License, Version 2.0",,,https://pypi.org/project/datasette-total-page-time/,,https://pypi.org/project/datasette-total-page-time/,"{""CI"": ""https://github.com/simonw/datasette-total-page-time/actions"", ""Changelog"": ""https://github.com/simonw/datasette-total-page-time/releases"", ""Homepage"": ""https://github.com/simonw/datasette-total-page-time"", ""Issues"": ""https://github.com/simonw/datasette-total-page-time/issues""}",https://pypi.org/project/datasette-total-page-time/0.1/,"[""datasette"", ""pytest ; extra == 'test'"", ""pytest-asyncio ; extra == 'test'""]",>=3.7,0.1,0, datasette-upload-dbs,Upload SQLite database files to Datasette,"[""Framework :: Datasette"", ""License :: OSI Approved :: Apache Software License""]","# datasette-upload-dbs [![PyPI](https://img.shields.io/pypi/v/datasette-upload-dbs.svg)](https://pypi.org/project/datasette-upload-dbs/) [![Changelog](https://img.shields.io/github/v/release/simonw/datasette-upload-dbs?include_prereleases&label=changelog)](https://github.com/simonw/datasette-upload-dbs/releases) [![Tests](https://github.com/simonw/datasette-upload-dbs/workflows/Test/badge.svg)](https://github.com/simonw/datasette-upload-dbs/actions?query=workflow%3ATest) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/datasette-upload-dbs/blob/main/LICENSE) Upload SQLite database files to Datasette ## Installation Install this plugin in the same environment as Datasette. datasette install datasette-upload-dbs ## Configuration This plugin requires you to configure a directory in which uploaded files will be stored. On startup, Datasette will automatically load any SQLite files that it finds in that directory. This means it is safe to restart your server in between file uploads. To configure the directory as `/home/datasette/uploads`, add this to a `metadata.yml` configuration file: ```yaml plugins: datasette-upload-dbs: directory: /home/datasette/uploads ``` Or if you are using `metadata.json`: ```json { ""plugins"": { ""datasette-upload-dbs"": { ""directory"": ""/home/datasette/uploads"" } } } ``` You can use `"".""` for the current folder when the server starts, or `""uploads""` for a folder relative to that folder. The folder will be created on startup if it does not already exist. Then start Datasette like this: datasette -m metadata.yml ## Usage Only users with the `upload-dbs` permission will be able to upload files. The `root` user has this permission by default - other users can be granted access using permission plugins, see the [Permissions](https://docs.datasette.io/en/stable/authentication.html#permissions) documentation for details. To start Datasette as the root user, run this: datasette -m metadata.yml --root And follow the link that is displayed on the console. If a user has that permission they will see an ""Upload database"" link in the navigation menu. This will take them to `/-/upload-dbs` where they will be able to upload database files, by selecting them or by dragging them onto the drop area. 
![Animated demo showing a file being dropped onto a box, then uploading and redirecting to the database page](https://github.com/simonw/datasette-upload-dbs/raw/main/upload-demo.gif) ## Development To set up this plugin locally, first checkout the code. Then create a new virtual environment: cd datasette-upload-dbs python3 -m venv venv source venv/bin/activate Now install the dependencies and test dependencies: pip install -e '.[test]' To run the tests: pytest ",Simon Willison,,text/markdown,https://github.com/simonw/datasette-upload-dbs,,"Apache License, Version 2.0",,,https://pypi.org/project/datasette-upload-dbs/,,https://pypi.org/project/datasette-upload-dbs/,"{""CI"": ""https://github.com/simonw/datasette-upload-dbs/actions"", ""Changelog"": ""https://github.com/simonw/datasette-upload-dbs/releases"", ""Homepage"": ""https://github.com/simonw/datasette-upload-dbs"", ""Issues"": ""https://github.com/simonw/datasette-upload-dbs/issues""}",https://pypi.org/project/datasette-upload-dbs/0.1.2/,"[""datasette"", ""starlette"", ""pytest ; extra == 'test'"", ""pytest-asyncio ; extra == 'test'""]",>=3.7,0.1.2,0, db-to-sqlite,CLI tool for exporting tables or queries from any SQL database to a SQLite file,"[""Development Status :: 3 - Alpha"", ""Intended Audience :: Developers"", ""Intended Audience :: End Users/Desktop"", ""Intended Audience :: Science/Research"", ""License :: OSI Approved :: Apache Software License"", ""Programming Language :: Python :: 3.6"", ""Programming Language :: Python :: 3.7"", ""Topic :: Database""]","# db-to-sqlite [![PyPI](https://img.shields.io/pypi/v/db-to-sqlite.svg)](https://pypi.python.org/pypi/db-to-sqlite) [![Changelog](https://img.shields.io/github/v/release/simonw/db-to-sqlite?include_prereleases&label=changelog)](https://github.com/simonw/db-to-sqlite/releases) [![Tests](https://github.com/simonw/db-to-sqlite/workflows/Test/badge.svg)](https://github.com/simonw/db-to-sqlite/actions?query=workflow%3ATest) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/db-to-sqlite/blob/main/LICENSE) CLI tool for exporting tables or queries from any SQL database to a SQLite file. ## Installation Install from PyPI like so: pip install db-to-sqlite If you want to use it with MySQL, you can install the extra dependency like this: pip install 'db-to-sqlite[mysql]' Installing the `mysqlclient` library on OS X can be tricky - I've found [this recipe](https://gist.github.com/simonw/90ac0afd204cd0d6d9c3135c3888d116) to work (run that before installing `db-to-sqlite`). For PostgreSQL, use this: pip install 'db-to-sqlite[postgresql]' ## Usage Usage: db-to-sqlite [OPTIONS] CONNECTION PATH Load data from any database into SQLite. PATH is a path to the SQLite file to create, e.g. /tmp/my_database.db CONNECTION is a SQLAlchemy connection string, for example: postgresql://localhost/my_database postgresql://username:passwd@localhost/my_database mysql://root@localhost/my_database mysql://username:passwd@localhost/my_database More: https://docs.sqlalchemy.org/en/13/core/engines.html#database-urls Options: --version Show the version and exit. --all Detect and copy all tables --table TEXT Specific tables to copy --skip TEXT When using --all skip these tables --redact TEXT... (table, column) pairs to redact with *** --sql TEXT Optional SQL query to run --output TEXT Table in which to save --sql query results --pk TEXT Optional column to use as a primary key --index-fks / --no-index-fks Should foreign keys have indexes? 
Default on -p, --progress Show progress bar --postgres-schema TEXT PostgreSQL schema to use --help Show this message and exit. For example, to save the content of the `blog_entry` table from a PostgreSQL database to a local file called `blog.db` you could do this: db-to-sqlite ""postgresql://localhost/myblog"" blog.db \ --table=blog_entry You can specify `--table` more than once. You can also save the data from all of your tables, effectively creating a SQLite copy of your entire database. Any foreign key relationships will be detected and added to the SQLite database. For example: db-to-sqlite ""postgresql://localhost/myblog"" blog.db \ --all When running `--all` you can specify tables to skip using `--skip`: db-to-sqlite ""postgresql://localhost/myblog"" blog.db \ --all \ --skip=django_migrations If you want to save the results of a custom SQL query, do this: db-to-sqlite ""postgresql://localhost/myblog"" output.db \ --output=query_results \ --sql=""select id, title, created from blog_entry"" \ --pk=id The `--output` option specifies the table that should contain the results of the query. ## Using db-to-sqlite with PostgreSQL schemas If the tables you want to copy from your PostgreSQL database aren't in the default schema, you can specify an alternate one with the `--postgres-schema` option: db-to-sqlite ""postgresql://localhost/myblog"" blog.db \ --all \ --postgres-schema my_schema ## Using db-to-sqlite with Heroku Postgres If you run an application on [Heroku](https://www.heroku.com/) using their [Postgres database product](https://www.heroku.com/postgres), you can use the `heroku config` command to access a compatible connection string: $ heroku config --app myappname | grep HEROKU_POSTG HEROKU_POSTGRESQL_OLIVE_URL: postgres://username:password@ec2-xxx-xxx-xxx-x.compute-1.amazonaws.com:5432/dbname You can pass this to `db-to-sqlite` to create a local SQLite database with the data from your Heroku instance. You can even do this using a bash one-liner: $ db-to-sqlite $(heroku config --app myappname | grep HEROKU_POSTG | cut -d: -f 2-) \ /tmp/heroku.db --all -p 1/23: django_migrations ... 17/23: blog_blogmark [####################################] 100% ... ## Related projects * [Datasette](https://github.com/simonw/datasette): A tool for exploring and publishing data. Works great with SQLite files generated using `db-to-sqlite`. * [sqlite-utils](https://github.com/simonw/sqlite-utils): Python CLI utility and library for manipulating SQLite databases. * [csvs-to-sqlite](https://github.com/simonw/csvs-to-sqlite): Convert CSV files into a SQLite database. ## Development To set up this tool locally, first checkout the code. Then create a new virtual environment: cd db-to-sqlite python3 -mvenv venv source venv/bin/activate Or if you are using `pipenv`: pipenv shell Now install the dependencies and test dependencies: pip install -e '.[test]' To run the tests: pytest This will skip tests against MySQL or PostgreSQL if you do not have their additional dependencies installed. You can install those extra dependencies like so: pip install -e '.[test_mysql,test_postgresql]' You can alternatively use `pip install psycopg2-binary` if you cannot install the `psycopg2` dependency used by the `test_postgresql` extra. See [Running a MySQL server using Homebrew](https://til.simonwillison.net/homebrew/mysql-homebrew) for tips on running the tests against MySQL on macOS, including how to install the `mysqlclient` dependency. 
The PostgreSQL and MySQL tests default to expecting to run against servers on localhost. You can use environment variables to point them at different test database servers: - `MYSQL_TEST_DB_CONNECTION` - defaults to `mysql://root@localhost/test_db_to_sqlite` - `POSTGRESQL_TEST_DB_CONNECTION` - defaults to `postgresql://localhost/test_db_to_sqlite` The database you indicate in the environment variable - `test_db_to_sqlite` by default - will be deleted and recreated on every test run. ",Simon Willison,,text/markdown,https://github.com/simonw/db-to-sqlite,,"Apache License, Version 2.0",,,https://pypi.org/project/db-to-sqlite/,,https://pypi.org/project/db-to-sqlite/,"{""CI"": ""https://travis-ci.com/simonw/db-to-sqlite"", ""Changelog"": ""https://github.com/simonw/db-to-sqlite/releases"", ""Documentation"": ""https://github.com/simonw/db-to-sqlite/blob/main/README.md"", ""Homepage"": ""https://github.com/simonw/db-to-sqlite"", ""Issues"": ""https://github.com/simonw/db-to-sqlite/issues"", ""Source code"": ""https://github.com/simonw/db-to-sqlite""}",https://pypi.org/project/db-to-sqlite/1.4/,"[""sqlalchemy"", ""sqlite-utils (>=2.9.1)"", ""click"", ""mysqlclient ; extra == 'mysql'"", ""psycopg2 ; extra == 'postgresql'"", ""pytest ; extra == 'test'"", ""pytest ; extra == 'test_mysql'"", ""mysqlclient ; extra == 'test_mysql'"", ""pytest ; extra == 'test_postgresql'"", ""psycopg2 ; extra == 'test_postgresql'""]",,1.4,0, dbf-to-sqlite,"CLI tool for converting DBF files (dBase, FoxPro etc) to SQLite","[""Development Status :: 3 - Alpha"", ""Intended Audience :: Developers"", ""Intended Audience :: End Users/Desktop"", ""Intended Audience :: Science/Research"", ""License :: OSI Approved :: Apache Software License"", ""Programming Language :: Python :: 3.6"", ""Programming Language :: Python :: 3.7"", ""Topic :: Database""]","# dbf-to-sqlite [![PyPI](https://img.shields.io/pypi/v/dbf-to-sqlite.svg)](https://pypi.python.org/pypi/dbf-to-sqlite) [![Travis CI](https://travis-ci.com/simonw/dbf-to-sqlite.svg?branch=master)](https://travis-ci.com/simonw/dbf-to-sqlite) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/dbf-to-sqlite/blob/master/LICENSE) CLI tool for converting DBF files (dBase, FoxPro etc) to SQLite. $ dbf-to-sqlite --help Usage: dbf-to-sqlite [OPTIONS] DBF_PATHS... SQLITE_DB Convert DBF files (dBase, FoxPro etc) to SQLite https://github.com/simonw/dbf-to-sqlite Options: --version Show the version and exit. --table TEXT Table name to use (only valid for single files) -v, --verbose Show what's going on --help Show this message and exit. Example usage: $ dbf-to-sqlite *.DBF database.db This will create a new SQLite database called `database.db` containing one table for each of the `DBF` files in the current directory. Looking for DBF files to try this out on? Try downloading the [Himalayan Database](http://himalayandatabase.com/) of all expeditions that have climbed in the Nepal Himalaya.
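When converting a single DBF file you can use the `--table` option to control the name of the resulting table. A hypothetical example - the `EXPED.DBF` filename here is an assumption, not a file guaranteed to be in that download:

    $ dbf-to-sqlite EXPED.DBF himalaya.db --table expeditions -v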
",Simon Willison,,text/markdown,https://github.com/simonw/dbf-to-sqlite,,"Apache License, Version 2.0",,,https://pypi.org/project/dbf-to-sqlite/,,https://pypi.org/project/dbf-to-sqlite/,"{""Homepage"": ""https://github.com/simonw/dbf-to-sqlite""}",https://pypi.org/project/dbf-to-sqlite/0.1/,"[""dbf (==0.97.11)"", ""click"", ""sqlite-utils""]",,0.1,0, markdown-to-sqlite,CLI tool for loading markdown files into a SQLite database,"[""Intended Audience :: Developers"", ""Intended Audience :: End Users/Desktop"", ""Intended Audience :: Science/Research"", ""License :: OSI Approved :: Apache Software License"", ""Programming Language :: Python :: 3.6"", ""Programming Language :: Python :: 3.7"", ""Topic :: Database""]","# markdown-to-sqlite [![PyPI](https://img.shields.io/pypi/v/markdown-to-sqlite.svg)](https://pypi.python.org/pypi/markdown-to-sqlite) [![Changelog](https://img.shields.io/github/v/release/simonw/markdown-to-sqlite?include_prereleases&label=changelog)](https://github.com/simonw/markdown-to-sqlite/releases) [![Tests](https://github.com/simonw/markdown-to-sqlite/workflows/Test/badge.svg)](https://github.com/simonw/markdown-to-sqlite/actions?query=workflow%3ATest) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/markdown-to-sqlite/blob/main/LICENSE) CLI tool for loading markdown files into a SQLite database. YAML embedded in the markdown files will be used to populate additional columns. Usage: markdown-to-sqlite [OPTIONS] DBNAME TABLE PATHS... For example: $ markdown-to-sqlite docs.db documents file1.md file2.md ## Breaking change Prior to version 1.0 this argument order was different - markdown files were listed before the database and table. ",Simon Willison,,text/markdown,https://github.com/simonw/markdown-to-sqlite,,"Apache License, Version 2.0",,,https://pypi.org/project/markdown-to-sqlite/,,https://pypi.org/project/markdown-to-sqlite/,"{""CI"": ""https://github.com/simonw/markdown-to-sqlite/actions"", ""Changelog"": ""https://github.com/simonw/markdown-to-sqlite/releases"", ""Homepage"": ""https://github.com/simonw/markdown-to-sqlite"", ""Issues"": ""https://github.com/simonw/markdown-to-sqlite/issues""}",https://pypi.org/project/markdown-to-sqlite/1.0/,"[""yamldown"", ""markdown"", ""sqlite-utils"", ""click"", ""pytest ; extra == 'test'""]",>=3.6,1.0,0, pocket-to-sqlite,Create a SQLite database containing data from your Pocket account,"[""License :: OSI Approved :: Apache Software License""]","# pocket-to-sqlite [![PyPI](https://img.shields.io/pypi/v/pocket-to-sqlite.svg)](https://pypi.org/project/pocket-to-sqlite/) [![Changelog](https://img.shields.io/github/v/release/dogsheep/pocket-to-sqlite?include_prereleases&label=changelog)](https://github.com/dogsheep/pocket-to-sqlite/releases) [![Tests](https://github.com/dogsheep/pocket-to-sqlite/workflows/Test/badge.svg)](https://github.com/dogsheep/pocket-to-sqlite/actions?query=workflow%3ATest) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/dogsheep/pocket-to-sqlite/blob/main/LICENSE) Create a SQLite database containing data from your [Pocket](https://getpocket.com/) account. ## How to install $ pip install pocket-to-sqlite ## Usage You will need to first obtain a valid OAuth token for your Pocket account. You can do this by running the `auth` command and following the prompts: $ pocket-to-sqlite auth Visit this page and sign in with your Pocket account: https://getpocket.com/auth/author... 
Once you have signed in there, hit `<enter>` to continue Authentication tokens written to auth.json Now you can fetch all of your items from Pocket like this: $ pocket-to-sqlite fetch pocket.db The first time you run this command it will fetch all of your items, and display a progress bar while it does it. On subsequent runs it will only fetch new items. You can force it to fetch everything from the beginning again using `--all`. Use `--silent` to disable the progress bar. ## Using with Datasette The SQLite database produced by this tool is designed to be browsed using [Datasette](https://datasette.readthedocs.io/). Use the [datasette-render-timestamps](https://github.com/simonw/datasette-render-timestamps) plugin to improve the display of the timestamp values. ",Simon Willison,,text/markdown,https://github.com/dogsheep/pocket-to-sqlite,,"Apache License, Version 2.0",,,https://pypi.org/project/pocket-to-sqlite/,,https://pypi.org/project/pocket-to-sqlite/,"{""CI"": ""https://github.com/dogsheep/pocket-to-sqlite/actions"", ""Changelog"": ""https://github.com/dogsheep/pocket-to-sqlite/releases"", ""Homepage"": ""https://github.com/dogsheep/pocket-to-sqlite"", ""Issues"": ""https://github.com/dogsheep/pocket-to-sqlite/issues""}",https://pypi.org/project/pocket-to-sqlite/0.2.2/,"[""sqlite-utils (>=2.4.4)"", ""click"", ""requests"", ""pytest ; extra == 'test'""]",,0.2.2,0, sqlite-colorbrewer,A custom function to use ColorBrewer scales in SQLite queries,"[""Framework :: Datasette"", ""License :: OSI Approved :: Apache Software License""]","# sqlite-colorbrewer [![PyPI](https://img.shields.io/pypi/v/sqlite-colorbrewer.svg)](https://pypi.org/project/sqlite-colorbrewer/) [![Changelog](https://img.shields.io/github/v/release/eyeseast/sqlite-colorbrewer?include_prereleases&label=changelog)](https://github.com/eyeseast/sqlite-colorbrewer/releases) [![Tests](https://github.com/eyeseast/sqlite-colorbrewer/workflows/Test/badge.svg)](https://github.com/eyeseast/sqlite-colorbrewer/actions?query=workflow%3ATest) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/eyeseast/sqlite-colorbrewer/blob/main/LICENSE) A custom function to use [ColorBrewer](https://colorbrewer2.org/) scales in SQLite queries. Colors are exported from [here](https://colorbrewer2.org/export/colorbrewer.json). ## Installation To install as a Python library and use with the [standard SQLite3 module](https://docs.python.org/3/library/sqlite3.html): pip install sqlite-colorbrewer To install this plugin in the same environment as Datasette: datasette install sqlite-colorbrewer ## Usage If you're using this library with Datasette, it will be automatically registered as a plugin and available for use in SQL queries, like so: ```sql SELECT colorbrewer('Blues', 9, 0); ``` That will return a single value: `""rgb(247,251,255)""` To use with a SQLite connection outside of Datasette, use the `register` function: ```python >>> import sqlite3 >>> import sqlite_colorbrewer >>> conn = sqlite3.connect(':memory:') >>> sqlite_colorbrewer.register(conn) >>> cursor = conn.execute(""SELECT colorbrewer('Blues', 9, 0);"") >>> result = next(cursor) >>> print(result[0]) rgb(247,251,255) ``` ## Development To set up this plugin locally, first check out the code.
Then create a new virtual environment: cd sqlite-colorbrewer python3 -mvenv venv source venv/bin/activate Or if you are using `pipenv`: pipenv shell Now install the dependencies and test dependencies: pip install -e '.[test]' To run the tests: pytest To build `sqlite_colorbrewer/colorbrewer.py`: ./json_to_python.py black . # to format the resulting file ## ColorBrewer Copyright (c) 2002 Cynthia Brewer, Mark Harrower, and The Pennsylvania State University. Licensed under the Apache License, Version 2.0 (the ""License""); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an ""AS IS"" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. See the [ColorBrewer updates](http://www.personal.psu.edu/cab38/ColorBrewer/ColorBrewer_updates.html) for updates to copyright information. ",Chris Amico,,text/markdown,https://github.com/eyeseast/sqlite-colorbrewer,,"Apache License, Version 2.0",,,https://pypi.org/project/sqlite-colorbrewer/,,https://pypi.org/project/sqlite-colorbrewer/,"{""CI"": ""https://github.com/eyeseast/sqlite-colorbrewer/actions"", ""Changelog"": ""https://github.com/eyeseast/sqlite-colorbrewer/releases"", ""Homepage"": ""https://github.com/eyeseast/sqlite-colorbrewer"", ""Issues"": ""https://github.com/eyeseast/sqlite-colorbrewer/issues""}",https://pypi.org/project/sqlite-colorbrewer/0.2/,"[""datasette ; extra == 'test'"", ""pytest ; extra == 'test'"", ""pytest-asyncio ; extra == 'test'""]",>=3.6,0.2,0, sqlite-diffable,Tools for dumping/loading a SQLite database to diffable directory structure,"[""Development Status :: 3 - Alpha"", ""Intended Audience :: Developers"", ""Intended Audience :: End Users/Desktop"", ""Intended Audience :: Science/Research"", ""License :: OSI Approved :: Apache Software License"", ""Programming Language :: Python :: 3.6"", ""Programming Language :: Python :: 3.7"", ""Topic :: Database""]","# sqlite-diffable [![PyPI](https://img.shields.io/pypi/v/sqlite-diffable.svg)](https://pypi.org/project/sqlite-diffable/) [![Changelog](https://img.shields.io/github/v/release/simonw/sqlite-diffable?include_prereleases&label=changelog)](https://github.com/simonw/sqlite-diffable/releases) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/sqlite-diffable/blob/main/LICENSE) Tools for dumping/loading a SQLite database to diffable directory structure ## Installation pip install sqlite-diffable ## Demo The repository at [simonw/simonwillisonblog-backup](https://github.com/simonw/simonwillisonblog-backup) contains a backup of the database on my blog, https://simonwillison.net/ - created using this tool. ## Dumping a database Given a SQLite database called `fixtures.db` containing a table `facetable`, the following will dump out that table to the `dump/` directory: sqlite-diffable dump fixtures.db dump/ facetable To dump out every table in that database, use `--all`: sqlite-diffable dump fixtures.db dump/ --all ## Loading a database To load a previously dumped database, run the following: sqlite-diffable load restored.db dump/ This will show an error if any of the tables that are being restored already exist in the database file. 
You can replace those tables (dropping them before restoring them) using the `--replace` option: sqlite-diffable load restored.db dump/ --replace ## Converting to JSON objects Table rows are stored in the `.ndjson` files as newline-delimited JSON arrays, like this: ``` [""a"", ""a"", ""a-a"", 63, null, 0.7364712141640124, ""$null""] [""a"", ""b"", ""a-b"", 51, null, 0.6020187290499803, ""$null""] ``` Sometimes it can be more convenient to work with a list of JSON objects. The `sqlite-diffable objects` command can read a `.ndjson` file and its accompanying `.metadata.json` file and output JSON objects to standard output: sqlite-diffable objects fixtures.db dump/sortable.ndjson The output of that command looks something like this: ``` {""pk1"": ""a"", ""pk2"": ""a"", ""content"": ""a-a"", ""sortable"": 63, ""sortable_with_nulls"": null, ""sortable_with_nulls_2"": 0.7364712141640124, ""text"": ""$null""} {""pk1"": ""a"", ""pk2"": ""b"", ""content"": ""a-b"", ""sortable"": 51, ""sortable_with_nulls"": null, ""sortable_with_nulls_2"": 0.6020187290499803, ""text"": ""$null""} ``` Add `-o` to write that output to a file: sqlite-diffable objects fixtures.db dump/sortable.ndjson -o output.txt Add `--array` to output a JSON array of objects, as opposed to a newline-delimited file: sqlite-diffable objects fixtures.db dump/sortable.ndjson --array Output: ``` [ {""pk1"": ""a"", ""pk2"": ""a"", ""content"": ""a-a"", ""sortable"": 63, ""sortable_with_nulls"": null, ""sortable_with_nulls_2"": 0.7364712141640124, ""text"": ""$null""}, {""pk1"": ""a"", ""pk2"": ""b"", ""content"": ""a-b"", ""sortable"": 51, ""sortable_with_nulls"": null, ""sortable_with_nulls_2"": 0.6020187290499803, ""text"": ""$null""} ] ``` ## Storage format Each table is represented as two files. The first, `table_name.metadata.json`, contains metadata describing the structure of the table. For a table called `redirects_redirect` that file might look like this: ```json { ""name"": ""redirects_redirect"", ""columns"": [ ""id"", ""domain"", ""path"", ""target"", ""created"" ], ""schema"": ""CREATE TABLE [redirects_redirect] (\n [id] INTEGER PRIMARY KEY,\n [domain] TEXT,\n [path] TEXT,\n [target] TEXT,\n [created] TEXT\n)"" } ``` It is an object with three keys: `name` is the name of the table, `columns` is an array of column names and `schema` is the SQL schema text used for that table. The second file, `table_name.ndjson`, contains [newline-delimited JSON](http://ndjson.org/) for every row in the table. Each row is represented as a JSON array with items corresponding to each of the columns defined in the metadata.
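Because the two files line up - column names in the metadata, positional values in the ndjson - turning rows back into objects takes only a few lines of Python. A minimal sketch (the file names follow the example above; this is an illustration of the format, not how the `objects` command is implemented):

```python
import json

# Read the column order from the metadata file:
with open("redirects_redirect.metadata.json") as fp:
    columns = json.load(fp)["columns"]

# Zip each newline-delimited JSON array against those columns
# to rebuild one dictionary per row:
with open("redirects_redirect.ndjson") as fp:
    for line in fp:
        row = dict(zip(columns, json.loads(line)))
        print(row)
```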
For the `redirects_redirect` table above, the `redirects_redirect.ndjson` file might look like this: ``` [1, ""feeds.simonwillison.net"", ""swn-everything"", ""https://simonwillison.net/atom/everything/"", ""2017-10-01T21:11:36.440537+00:00""] [2, ""feeds.simonwillison.net"", ""swn-entries"", ""https://simonwillison.net/atom/entries/"", ""2017-10-01T21:12:32.478849+00:00""] [3, ""feeds.simonwillison.net"", ""swn-links"", ""https://simonwillison.net/atom/links/"", ""2017-10-01T21:12:54.820729+00:00""] ``` ",Simon Willison,,text/markdown,https://github.com/simonw/sqlite-diffable,,"Apache License, Version 2.0",,,https://pypi.org/project/sqlite-diffable/,,https://pypi.org/project/sqlite-diffable/,"{""CI"": ""https://github.com/simonw/sqlite-diffable/actions"", ""Changelog"": ""https://github.com/simonw/sqlite-diffable/releases"", ""Homepage"": ""https://github.com/simonw/sqlite-diffable"", ""Issues"": ""https://github.com/simonw/sqlite-diffable/issues""}",https://pypi.org/project/sqlite-diffable/0.5/,"[""click"", ""sqlite-utils"", ""pytest ; extra == 'test'"", ""black ; extra == 'test'""]",,0.5,0, sqlite-utils,CLI tool and Python utility functions for manipulating SQLite databases,"[""Development Status :: 5 - Production/Stable"", ""Intended Audience :: Developers"", ""Intended Audience :: End Users/Desktop"", ""Intended Audience :: Science/Research"", ""License :: OSI Approved :: Apache Software License"", ""Programming Language :: Python :: 3.10"", ""Programming Language :: Python :: 3.6"", ""Programming Language :: Python :: 3.7"", ""Programming Language :: Python :: 3.8"", ""Programming Language :: Python :: 3.9"", ""Topic :: Database""]","# sqlite-utils [![PyPI](https://img.shields.io/pypi/v/sqlite-utils.svg)](https://pypi.org/project/sqlite-utils/) [![Changelog](https://img.shields.io/github/v/release/simonw/sqlite-utils?include_prereleases&label=changelog)](https://sqlite-utils.datasette.io/en/stable/changelog.html) [![Python 3.x](https://img.shields.io/pypi/pyversions/sqlite-utils.svg?logo=python&logoColor=white)](https://pypi.org/project/sqlite-utils/) [![Tests](https://github.com/simonw/sqlite-utils/workflows/Test/badge.svg)](https://github.com/simonw/sqlite-utils/actions?query=workflow%3ATest) [![Documentation Status](https://readthedocs.org/projects/sqlite-utils/badge/?version=stable)](http://sqlite-utils.datasette.io/en/stable/?badge=stable) [![codecov](https://codecov.io/gh/simonw/sqlite-utils/branch/main/graph/badge.svg)](https://codecov.io/gh/simonw/sqlite-utils) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/sqlite-utils/blob/main/LICENSE) [![discord](https://img.shields.io/discord/823971286308356157?label=discord)](https://discord.gg/Ass7bCAMDw) Python CLI utility and library for manipulating SQLite databases.
## Some feature highlights - [Pipe JSON](https://sqlite-utils.datasette.io/en/stable/cli.html#inserting-json-data) (or [CSV or TSV](https://sqlite-utils.datasette.io/en/stable/cli.html#inserting-csv-or-tsv-data)) directly into a new SQLite database file, automatically creating a table with the appropriate schema - [Run in-memory SQL queries](https://sqlite-utils.datasette.io/en/stable/cli.html#querying-data-directly-using-an-in-memory-database), including joins, directly against data in CSV, TSV or JSON files and view the results - [Configure SQLite full-text search](https://sqlite-utils.datasette.io/en/stable/cli.html#configuring-full-text-search) against your database tables and run search queries against them, ordered by relevance - Run [transformations against your tables](https://sqlite-utils.datasette.io/en/stable/cli.html#transforming-tables) to make schema changes that SQLite `ALTER TABLE` does not directly support, such as changing the type of a column - [Extract columns](https://sqlite-utils.datasette.io/en/stable/cli.html#extracting-columns-into-a-separate-table) into separate tables to better normalize your existing data Read more on my blog, in this series of posts on [New features in sqlite-utils](https://simonwillison.net/series/sqlite-utils-features/) and other [entries tagged sqliteutils](https://simonwillison.net/tags/sqliteutils/). ## Installation pip install sqlite-utils Or if you use [Homebrew](https://brew.sh/) for macOS: brew install sqlite-utils ## Using as a CLI tool Now you can do things with the CLI utility like this: $ sqlite-utils memory dogs.csv ""select * from t"" [{""id"": 1, ""age"": 4, ""name"": ""Cleo""}, {""id"": 2, ""age"": 2, ""name"": ""Pancakes""}] $ sqlite-utils insert dogs.db dogs dogs.csv --csv [####################################] 100% $ sqlite-utils tables dogs.db --counts [{""table"": ""dogs"", ""count"": 2}] $ sqlite-utils dogs.db ""select id, name from dogs"" [{""id"": 1, ""name"": ""Cleo""}, {""id"": 2, ""name"": ""Pancakes""}] $ sqlite-utils dogs.db ""select * from dogs"" --csv id,age,name 1,4,Cleo 2,2,Pancakes $ sqlite-utils dogs.db ""select * from dogs"" --table id age name ---- ----- -------- 1 4 Cleo 2 2 Pancakes You can import JSON data into a new database table like this: $ curl https://api.github.com/repos/simonw/sqlite-utils/releases \ | sqlite-utils insert releases.db releases - --pk id Or for data in a CSV file: $ sqlite-utils insert dogs.db dogs dogs.csv --csv `sqlite-utils memory` lets you import CSV or JSON data into an in-memory database and run SQL queries against it in a single command: $ cat dogs.csv | sqlite-utils memory - ""select name, age from stdin"" See the [full CLI documentation](https://sqlite-utils.datasette.io/en/stable/cli.html) for comprehensive coverage of many more commands. ## Using as a library You can also `import sqlite_utils` and use it as a Python library like this: ```python import sqlite_utils db = sqlite_utils.Database(""demo_database.db"") # This line creates a ""dogs"" table if one does not already exist: db[""dogs""].insert_all([ {""id"": 1, ""age"": 4, ""name"": ""Cleo""}, {""id"": 2, ""age"": 2, ""name"": ""Pancakes""} ], pk=""id"") ``` Check out the [full library documentation](https://sqlite-utils.datasette.io/en/stable/python-api.html) for everything else you can do with the Python library. 
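Reading data back out is just as compact. A short follow-on sketch, continuing the `demo_database.db` example above and using the library's documented `rows` and `rows_where` table methods:

```python
# Iterate over every row in the dogs table as dictionaries:
for row in db["dogs"].rows:
    print(row)

# Or filter with a SQL where clause and arguments:
for row in db["dogs"].rows_where("age > ?", [3]):
    print(row)  # {'id': 1, 'age': 4, 'name': 'Cleo'}
```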
## Related projects * [Datasette](https://datasette.io/): A tool for exploring and publishing data * [csvs-to-sqlite](https://github.com/simonw/csvs-to-sqlite): Convert CSV files into a SQLite database * [db-to-sqlite](https://github.com/simonw/db-to-sqlite): CLI tool for exporting a MySQL or PostgreSQL database as a SQLite file * [dogsheep](https://dogsheep.github.io/): A family of tools for personal analytics, built on top of `sqlite-utils` ",Simon Willison,,text/markdown,https://github.com/simonw/sqlite-utils,,"Apache License, Version 2.0",,,https://pypi.org/project/sqlite-utils/,,https://pypi.org/project/sqlite-utils/,"{""CI"": ""https://github.com/simonw/sqlite-utils/actions"", ""Changelog"": ""https://sqlite-utils.datasette.io/en/stable/changelog.html"", ""Documentation"": ""https://sqlite-utils.datasette.io/en/stable/"", ""Homepage"": ""https://github.com/simonw/sqlite-utils"", ""Issues"": ""https://github.com/simonw/sqlite-utils/issues"", ""Source code"": ""https://github.com/simonw/sqlite-utils""}",https://pypi.org/project/sqlite-utils/3.30/,"[""sqlite-fts4"", ""click"", ""click-default-group-wheel"", ""tabulate"", ""python-dateutil"", ""furo ; extra == 'docs'"", ""sphinx-autobuild ; extra == 'docs'"", ""codespell ; extra == 'docs'"", ""sphinx-copybutton ; extra == 'docs'"", ""beanbag-docutils (>=2.0) ; extra == 'docs'"", ""flake8 ; extra == 'flake8'"", ""mypy ; extra == 'mypy'"", ""types-click ; extra == 'mypy'"", ""types-tabulate ; extra == 'mypy'"", ""types-python-dateutil ; extra == 'mypy'"", ""data-science-types ; extra == 'mypy'"", ""pytest ; extra == 'test'"", ""black ; extra == 'test'"", ""hypothesis ; extra == 'test'"", ""cogapp ; extra == 'test'""]",>=3.6,3.30,0, yaml-to-sqlite,Utility for converting YAML files to SQLite,"[""Development Status :: 3 - Alpha"", ""Intended Audience :: Developers"", ""Intended Audience :: End Users/Desktop"", ""Intended Audience :: Science/Research"", ""License :: OSI Approved :: Apache Software License"", ""Programming Language :: Python :: 3.6"", ""Programming Language :: Python :: 3.7""]","# yaml-to-sqlite [![PyPI](https://img.shields.io/pypi/v/yaml-to-sqlite.svg)](https://pypi.org/project/yaml-to-sqlite/) [![Changelog](https://img.shields.io/github/v/release/simonw/yaml-to-sqlite?include_prereleases&label=changelog)](https://github.com/simonw/yaml-to-sqlite/releases) [![Tests](https://github.com/simonw/yaml-to-sqlite/workflows/Test/badge.svg)](https://github.com/simonw/yaml-to-sqlite/actions?query=workflow%3ATest) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/yaml-to-sqlite/blob/main/LICENSE) Load the contents of a YAML file into a SQLite database table. ``` $ yaml-to-sqlite --help Usage: yaml-to-sqlite [OPTIONS] DB_PATH TABLE YAML_FILE Convert YAML files to SQLite Options: --version Show the version and exit. --pk TEXT Column to use as a primary key --single-column TEXT If YAML file is a list of values, populate this column --help Show this message and exit. ``` ## Usage Given a `news.yml` file containing the following: ```yaml - date: 2021-06-05 body: |- [Datasette 0.57](https://docs.datasette.io/en/stable/changelog.html#v0-57) is out with an important security patch. - date: 2021-05-10 body: |- [Django SQL Dashboard](https://simonwillison.net/2021/May/10/django-sql-dashboard/) is a new tool that brings a useful authenticated subset of Datasette to Django projects that are built on top of PostgreSQL. 
``` Running this command: ```bash $ yaml-to-sqlite news.db stories news.yml ``` Will create a database file with this schema: ```bash $ sqlite-utils schema news.db CREATE TABLE [stories] ( [date] TEXT, [body] TEXT ); ``` The `--pk` option can be used to set a column as the primary key for the table: ```bash $ yaml-to-sqlite news.db stories news.yml --pk date $ sqlite-utils schema news.db CREATE TABLE [stories] ( [date] TEXT PRIMARY KEY, [body] TEXT ); ``` ## Single column YAML lists The `--single-column` option can be used when the YAML file is a list of values, for example a file called `dogs.yml` containing the following: ```yaml - Cleo - Pancakes - Nixie ``` Running this command: ```bash $ yaml-to-sqlite dogs.db dogs dogs.yml --single-column=name ``` Will create a single `dogs` table with a single `name` column that is the primary key: ```bash $ sqlite-utils schema dogs.db CREATE TABLE [dogs] ( [name] TEXT PRIMARY KEY ); $ sqlite-utils dogs.db 'select * from dogs' -t name -------- Cleo Pancakes Nixie ``` ",Simon Willison,,text/markdown,https://github.com/simonw/yaml-to-sqlite,,"Apache License, Version 2.0",,,https://pypi.org/project/yaml-to-sqlite/,,https://pypi.org/project/yaml-to-sqlite/,"{""Homepage"": ""https://github.com/simonw/yaml-to-sqlite""}",https://pypi.org/project/yaml-to-sqlite/1.0/,"[""click"", ""PyYAML"", ""sqlite-utils (>=3.9.1)"", ""pytest ; extra == 'test'""]",,1.0,0,