
db-to-sqlite

CLI tool for exporting tables or queries from any SQL database to a SQLite file.

Installation

Install from PyPI like so:

pip install db-to-sqlite

If you want to use it with MySQL, you can install the extra dependency like this:

pip install 'db-to-sqlite[mysql]'

Installing the mysqlclient library on OS X can be tricky - I've found this recipe (https://gist.github.com/simonw/90ac0afd204cd0d6d9c3135c3888d116) to work (run that before installing db-to-sqlite).

For PostgreSQL, use this:

pip install 'db-to-sqlite[postgresql]'

Usage

Usage: db-to-sqlite [OPTIONS] CONNECTION PATH

  Load data from any database into SQLite.

  PATH is a path to the SQLite file to create, e.g. /tmp/my_database.db

  CONNECTION is a SQLAlchemy connection string, for example:

      postgresql://localhost/my_database
      postgresql://username:passwd@localhost/my_database

      mysql://root@localhost/my_database
      mysql://username:passwd@localhost/my_database

  More: https://docs.sqlalchemy.org/en/13/core/engines.html#database-urls

Options:
  --version                     Show the version and exit.
  --all                         Detect and copy all tables
  --table TEXT                  Specific tables to copy
  --skip TEXT                   When using --all skip these tables
  --redact TEXT...              (table, column) pairs to redact with ***
  --sql TEXT                    Optional SQL query to run
  --output TEXT                 Table in which to save --sql query results
  --pk TEXT                     Optional column to use as a primary key
  --index-fks / --no-index-fks  Should foreign keys have indexes? Default on
  -p, --progress                Show progress bar
  --postgres-schema TEXT        PostgreSQL schema to use
  --help                        Show this message and exit.

For example, to save the content of the blog_entry table from a PostgreSQL database to a local file called blog.db you could do this:

db-to-sqlite ""postgresql://localhost/myblog"" blog.db \
    --table=blog_entry

You can specify --table more than once.
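
For example, to copy two tables in a single run (the second table name here is illustrative):

db-to-sqlite "postgresql://localhost/myblog" blog.db \
    --table=blog_entry \
    --table=blog_tag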

You can also save the data from all of your tables, effectively creating a SQLite copy of your entire database. Any foreign key relationships will be detected and added to the SQLite database. For example:

db-to-sqlite ""postgresql://localhost/myblog"" blog.db \
    --all

When running --all you can specify tables to skip using --skip:

db-to-sqlite ""postgresql://localhost/myblog"" blog.db \
    --all \
    --skip=django_migrations

If you want to save the results of a custom SQL query, do this:

db-to-sqlite ""postgresql://localhost/myblog"" output.db \
    --output=query_results \
    --sql=""select id, title, created from blog_entry"" \
    --pk=id

The --output option specifies the table that should contain the results of the query.
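
You can then inspect the saved results with the standard sqlite3 shell - a quick check along these lines:

sqlite3 output.db 'select count(*) from query_results'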

Using db-to-sqlite with PostgreSQL schemas

If the tables you want to copy from your PostgreSQL database aren't in the default schema, you can specify an alternate one with the --postgres-schema option:

db-to-sqlite ""postgresql://localhost/myblog"" blog.db \
    --all \
    --postgres-schema my_schema

Using db-to-sqlite with Heroku Postgres

If you run an application on Heroku using their Postgres database product, you can use the heroku config command to access a compatible connection string:

$ heroku config --app myappname | grep HEROKU_POSTG
HEROKU_POSTGRESQL_OLIVE_URL: postgres://username:password@ec2-xxx-xxx-xxx-x.compute-1.amazonaws.com:5432/dbname

You can pass this to db-to-sqlite to create a local SQLite database with the data from your Heroku instance.

You can even do this using a bash one-liner:

$ db-to-sqlite $(heroku config --app myappname | grep HEROKU_POSTG | cut -d: -f 2-) \
    /tmp/heroku.db --all -p
1/23: django_migrations
...
17/23: blog_blogmark
[####################################]  100%
...

Related projects

- Datasette (https://github.com/simonw/datasette): A tool for exploring and publishing data. Works great with SQLite files generated using db-to-sqlite.
- sqlite-utils (https://github.com/simonw/sqlite-utils): Python CLI utility and library for manipulating SQLite databases.
- csvs-to-sqlite (https://github.com/simonw/csvs-to-sqlite): Convert CSV files into a SQLite database.

Development

To set up this tool locally, first check out the code. Then create a new virtual environment:

cd db-to-sqlite
python3 -mvenv venv
source venv/bin/activate

Or if you are using pipenv:

pipenv shell

Now install the dependencies and test dependencies:

pip install -e '.[test]'

To run the tests:

pytest

This will skip tests against MySQL or PostgreSQL if you do not have their additional dependencies installed.

You can install those extra dependencies like so:

pip install -e '.[test_mysql,test_postgresql]'

You can alternatively use pip install psycopg2-binary if you cannot install the psycopg2 dependency used by the test_postgresql extra.

See Running a MySQL server using Homebrew (https://til.simonwillison.net/homebrew/mysql-homebrew) for tips on running the tests against MySQL on macOS, including how to install the mysqlclient dependency.

The PostgreSQL and MySQL tests default to expecting to run against servers on localhost. You can use environment variables to point them at different test database servers:

- MYSQL_TEST_DB_CONNECTION - defaults to mysql://root@localhost/test_db_to_sqlite
- POSTGRESQL_TEST_DB_CONNECTION - defaults to postgresql://localhost/test_db_to_sqlite

The database you indicate in the environment variable - test_db_to_sqlite by default - will be deleted and recreated on every test run.
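
For example, to run the tests against a MySQL server on another host (the hostname here is hypothetical):

MYSQL_TEST_DB_CONNECTION=mysql://root@db.example.com/test_db_to_sqlite pytest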

",,,,,, 168474970,MDEwOlJlcG9zaXRvcnkxNjg0NzQ5NzA=,dbf-to-sqlite,simonw/dbf-to-sqlite,0,9599,https://github.com/simonw/dbf-to-sqlite,"CLI tool for converting DBF files (dBase, FoxPro etc) to SQLite",0,2019-01-31T06:30:46Z,2021-03-23T01:29:41Z,2020-02-16T00:41:20Z,,8,25,25,Python,1,1,1,1,0,8,0,0,3,apache-2.0,"[""sqlite"", ""foxpro"", ""dbf"", ""dbase"", ""datasette-io"", ""datasette-tool""]",8,3,25,master,"{""admin"": false, ""push"": false, ""pull"": false}",,,8,2,"# dbf-to-sqlite [![PyPI](https://img.shields.io/pypi/v/dbf-to-sqlite.svg)](https://pypi.python.org/pypi/dbf-to-sqlite) [![Travis CI](https://travis-ci.com/simonw/dbf-to-sqlite.svg?branch=master)](https://travis-ci.com/simonw/dbf-to-sqlite) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/dbf-to-sqlite/blob/master/LICENSE) CLI tool for converting DBF files (dBase, FoxPro etc) to SQLite. ## Installation pip install dbf-to-sqlite ## Usage $ dbf-to-sqlite --help Usage: dbf-to-sqlite [OPTIONS] DBF_PATHS... SQLITE_DB Convert DBF files (dBase, FoxPro etc) to SQLite https://github.com/simonw/dbf-to-sqlite Options: --version Show the version and exit. --table TEXT Table name to use (only valid for single files) -v, --verbose Show what's going on --help Show this message and exit. Example usage: $ dbf-to-sqlite *.DBF database.db This will create a new SQLite database called `database.db` containing one table for each of the `DBF` files in the current directory. Looking for DBF files to try this out on? Try downloading the [Himalayan Database](http://himalayandatabase.com/) of all expeditions that have climbed in the Nepal Himalaya. ","

dbf-to-sqlite

CLI tool for converting DBF files (dBase, FoxPro etc) to SQLite.

Installation

pip install dbf-to-sqlite

Usage

$ dbf-to-sqlite --help
Usage: dbf-to-sqlite [OPTIONS] DBF_PATHS... SQLITE_DB

  Convert DBF files (dBase, FoxPro etc) to SQLite

  https://github.com/simonw/dbf-to-sqlite

Options:
  --version      Show the version and exit.
  --table TEXT   Table name to use (only valid for single files)
  -v, --verbose  Show what's going on
  --help         Show this message and exit.

Example usage:

$ dbf-to-sqlite *.DBF database.db

This will create a new SQLite database called database.db containing one table for each of the DBF files in the current directory.
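
If you are converting a single file, the --table option sets the name of the resulting table - for example (file and table names here are hypothetical):

$ dbf-to-sqlite expeditions.DBF himalaya.db --table=expeditions -v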

Looking for DBF files to try this out on? Try downloading the Himalayan Database (http://himalayandatabase.com/) of all expeditions that have climbed in the Nepal Himalaya.

",,,,,, 175550127,MDEwOlJlcG9zaXRvcnkxNzU1NTAxMjc=,yaml-to-sqlite,simonw/yaml-to-sqlite,0,9599,https://github.com/simonw/yaml-to-sqlite,Utility for converting YAML files to SQLite,0,2019-03-14T04:49:08Z,2021-06-13T09:04:40Z,2021-06-13T04:45:52Z,,19,36,36,Python,1,1,1,1,0,2,0,0,0,apache-2.0,"[""yaml"", ""sqlite"", ""datasette-io"", ""datasette-tool""]",2,0,36,main,"{""admin"": false, ""push"": false, ""pull"": false}",,,2,1,"# yaml-to-sqlite [![PyPI](https://img.shields.io/pypi/v/yaml-to-sqlite.svg)](https://pypi.org/project/yaml-to-sqlite/) [![Changelog](https://img.shields.io/github/v/release/simonw/yaml-to-sqlite?include_prereleases&label=changelog)](https://github.com/simonw/yaml-to-sqlite/releases) [![Tests](https://github.com/simonw/yaml-to-sqlite/workflows/Test/badge.svg)](https://github.com/simonw/yaml-to-sqlite/actions?query=workflow%3ATest) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/yaml-to-sqlite/blob/main/LICENSE) Load the contents of a YAML file into a SQLite database table. ``` $ yaml-to-sqlite --help Usage: yaml-to-sqlite [OPTIONS] DB_PATH TABLE YAML_FILE Convert YAML files to SQLite Options: --version Show the version and exit. --pk TEXT Column to use as a primary key --single-column TEXT If YAML file is a list of values, populate this column --help Show this message and exit. ``` ## Usage Given a `news.yml` file containing the following: ```yaml - date: 2021-06-05 body: |- [Datasette 0.57](https://docs.datasette.io/en/stable/changelog.html#v0-57) is out with an important security patch. - date: 2021-05-10 body: |- [Django SQL Dashboard](https://simonwillison.net/2021/May/10/django-sql-dashboard/) is a new tool that brings a useful authenticated subset of Datasette to Django projects that are built on top of PostgreSQL. ``` Running this command: ```bash $ yaml-to-sqlite news.db stories news.yml ``` Will create a database file with this schema: ```bash $ sqlite-utils schema news.db CREATE TABLE [stories] ( [date] TEXT, [body] TEXT ); ``` The `--pk` option can be used to set a column as the primary key for the table: ```bash $ yaml-to-sqlite news.db stories news.yml --pk date $ sqlite-utils schema news.db CREATE TABLE [stories] ( [date] TEXT PRIMARY KEY, [body] TEXT ); ``` ## Single column YAML lists The `--single-column` option can be used when the YAML file is a list of values, for example a file called `dogs.yml` containing the following: ```yaml - Cleo - Pancakes - Nixie ``` Running this command: ```bash $ yaml-to-sqlite dogs.db dogs.yaml --single-column=name ``` Will create a single `dogs` table with a single `name` column that is the primary key: ```bash $ sqlite-utils schema dogs.db CREATE TABLE [dogs] ( [name] TEXT PRIMARY KEY ); $ sqlite-utils dogs.db 'select * from dogs' -t name -------- Cleo Pancakes Nixie ``` ","

yaml-to-sqlite

Load the contents of a YAML file into a SQLite database table.

$ yaml-to-sqlite --help
Usage: yaml-to-sqlite [OPTIONS] DB_PATH TABLE YAML_FILE

  Convert YAML files to SQLite

Options:
  --version             Show the version and exit.
  --pk TEXT             Column to use as a primary key
  --single-column TEXT  If YAML file is a list of values, populate this column
  --help                Show this message and exit.

Usage

Given a news.yml file containing the following:

- date: 2021-06-05
  body: |-
    [Datasette 0.57](https://docs.datasette.io/en/stable/changelog.html#v0-57) is out with an important security patch.
- date: 2021-05-10
  body: |-
    [Django SQL Dashboard](https://simonwillison.net/2021/May/10/django-sql-dashboard/) is a new tool that brings a useful authenticated subset of Datasette to Django projects that are built on top of PostgreSQL.

Running this command:

$ yaml-to-sqlite news.db stories news.yml

Will create a database file with this schema:

$ sqlite-utils schema news.db
CREATE TABLE [stories] (
   [date] TEXT,
   [body] TEXT
);

The --pk option can be used to set a column as the primary key for the table:

$ yaml-to-sqlite news.db stories news.yml --pk date
$ sqlite-utils schema news.db
CREATE TABLE [stories] (
   [date] TEXT PRIMARY KEY,
   [body] TEXT
);

Single column YAML lists

The --single-column option can be used when the YAML file is a list of values, for example a file called dogs.yml containing the following:

- Cleo
- Pancakes
- Nixie

Running this command:

$ yaml-to-sqlite dogs.db dogs dogs.yml --single-column=name

Will create a single dogs table with a single name column that is the primary key:

$ sqlite-utils schema dogs.db
CREATE TABLE [dogs] (
   [name] TEXT PRIMARY KEY
);
$ sqlite-utils dogs.db 'select * from dogs' -t
name
--------
Cleo
Pancakes
Nixie
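
You can then explore either database using Datasette (https://github.com/simonw/datasette), assuming you have it installed:

$ datasette dogs.db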
",,,,,, 219372133,MDEwOlJlcG9zaXRvcnkyMTkzNzIxMzM=,sqlite-transform,simonw/sqlite-transform,0,9599,https://github.com/simonw/sqlite-transform,Tool for running transformations on columns in a SQLite database,0,2019-11-03T22:07:53Z,2021-08-02T22:06:23Z,2021-08-02T22:07:57Z,,64,29,29,Python,1,1,1,1,0,1,0,0,0,apache-2.0,"[""sqlite"", ""datasette-io"", ""datasette-tool""]",1,0,29,main,"{""admin"": false, ""push"": false, ""pull"": false}",,,1,1,"# sqlite-transform ![No longer maintained](https://img.shields.io/badge/no%20longer-maintained-red) [![PyPI](https://img.shields.io/pypi/v/sqlite-transform.svg)](https://pypi.org/project/sqlite-transform/) [![Changelog](https://img.shields.io/github/v/release/simonw/sqlite-transform?include_prereleases&label=changelog)](https://github.com/simonw/sqlite-transform/releases) [![Tests](https://github.com/simonw/sqlite-transform/workflows/Test/badge.svg)](https://github.com/simonw/sqlite-transform/actions?query=workflow%3ATest) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/dogsheep/sqlite-transform/blob/main/LICENSE) Tool for running transformations on columns in a SQLite database. > **:warning: This tool is no longer maintained** > > I added a new tool to [sqlite-utils](https://sqlite-utils.datasette.io/) called [sqlite-utils convert](https://sqlite-utils.datasette.io/en/stable/cli.html#converting-data-in-columns) which provides a super-set of the functionality originally provided here. `sqlite-transform` is no longer maintained, and I recommend switching to using `sqlite-utils convert` instead. ## How to install pip install sqlite-transform ## parsedate and parsedatetime These subcommands will run all values in the specified column through `dateutils.parser.parse()` and replace them with the result, formatted as an ISO timestamp or ISO date. For example, if a row in the database has an `opened` column which contains `10/10/2019 08:10:00 PM`, running the following command: sqlite-transform parsedatetime my.db mytable opened Will result in that value being replaced by `2019-10-10T20:10:00`. Using the `parsedate` subcommand here would result in `2019-10-10` instead. In the case of ambiguous dates such as `03/04/05` these commands both default to assuming American-style `mm/dd/yy` format. You can pass `--dayfirst` to specify that the day should be assumed to be first, or `--yearfirst` for the year. ## jsonsplit The `jsonsplit` subcommand takes columns that contain a comma-separated list, for example a `tags` column containing records like `""trees,park,dogs""` and converts it into a JSON array `[""trees"", ""park"", ""dogs""]`. This is useful for taking advantage of Datasette's [Facet by JSON array](https://docs.datasette.io/en/stable/facets.html#facet-by-json-array) feature. sqlite-transform jsonsplit my.db mytable tags It defaults to splitting on commas, but you can specify a different delimiter character using the `--delimiter` option, for example: sqlite-transform jsonsplit \ my.db mytable tags --delimiter ';' Values within the array will be treated as strings, so a column containing `123,552,775` will be converted into the JSON array `[""123"", ""552"", ""775""]`. You can specify a different type for these values using `--type int` or `--type float`, for example: sqlite-transform jsonsplit \ my.db mytable tags --type int This will result in that column being converted into `[123, 552, 775]`. 
## lambda for executing your own code The `lambda` subcommand lets you specify Python code which will be executed against the column. Here's how to convert a column to uppercase: sqlite-transform lambda my.db mytable mycolumn --code='str(value).upper()' The code you provide will be compiled into a function that takes `value` as a single argument. You can break your function body into multiple lines, provided the last line is a `return` statement: sqlite-transform lambda my.db mytable mycolumn --code='value = str(value) return value.upper()' You can also specify Python modules that should be imported and made available to your code using one or more `--import` options: sqlite-transform lambda my.db mytable mycolumn \ --code='""\n"".join(textwrap.wrap(value, 10))' \ --import=textwrap The `--dry-run` option will output a preview of the transformation against the first ten rows, without modifying the database. ## Saving the result to a separate column Each of these commands accepts optional `--output` and `--output-type` options. These can be used to save the result of the transformation to a separate column, which will be created if the column does not already exist. To save the result of `jsonsplit` to a new column called `json_tags`, use the following: sqlite-transform jsonsplit my.db mytable tags \ --output json_tags The type of the created column defaults to `text`, but a different column type can be specified using `--output-type`. This example will create a new floating point column called `float_id` with a copy of each item's ID increased by 0.5: sqlite-transform lambda my.db mytable id \ --code 'float(value) + 0.5' \ --output float_id \ --output-type float You can drop the original column at the end of the operation by adding `--drop`. ## Splitting a column into multiple columns Sometimes you may wish to convert a single column into multiple derived columns. For example, you may have a `location` column containing `latitude,longitude` values which you wish to split out into separate `latitude` and `longitude` columns. You can achieve this using the `--multi` option to `sqlite-transform lambda`. This option expects your `--code` function to return a Python dictionary: new columns well be created and populated for each of the keys in that dictionary. For the `latitude,longitude` example you would use the following: sqlite-transform lambda demo.db places location \ --code 'return { ""latitude"": float(value.split("","")[0]), ""longitude"": float(value.split("","")[1]), }' --multi The type of the returned values will be taken into account when creating the new columns. In this example, the resulting database schema will look like this: ```sql CREATE TABLE [places] ( [location] TEXT, [latitude] FLOAT, [longitude] FLOAT ); ``` The code function can also return `None`, in which case its output will be ignored. You can drop the original column at the end of the operation by adding `--drop`. ## Disabling the progress bar By default each command will show a progress bar. Pass `-s` or `--silent` to hide that progress bar. ","

sqlite-transform

Tool for running transformations on columns in a SQLite database.

⚠️ This tool is no longer maintained

I added a new tool to sqlite-utils called sqlite-utils convert which provides a super-set of the functionality originally provided here. sqlite-transform is no longer maintained, and I recommend switching to using sqlite-utils convert instead.

How to install

pip install sqlite-transform

parsedate and parsedatetime

These subcommands will run all values in the specified column through dateutil.parser.parse() and replace them with the result, formatted as an ISO timestamp or ISO date.

For example, if a row in the database has an opened column which contains 10/10/2019 08:10:00 PM, running the following command:

sqlite-transform parsedatetime my.db mytable opened

Will result in that value being replaced by 2019-10-10T20:10:00.

Using the parsedate subcommand here would result in 2019-10-10 instead.

In the case of ambiguous dates such as 03/04/05 these commands both default to assuming American-style mm/dd/yy format. You can pass --dayfirst to specify that the day should be assumed to be first, or --yearfirst for the year.
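
For example, to treat the day as coming first when parsing:

sqlite-transform parsedate my.db mytable opened --dayfirst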

jsonsplit

The jsonsplit subcommand takes columns that contain a comma-separated list, for example a tags column containing records like "trees,park,dogs" and converts it into a JSON array ["trees", "park", "dogs"].

This is useful for taking advantage of Datasette's Facet by JSON array feature.

sqlite-transform jsonsplit my.db mytable tags

It defaults to splitting on commas, but you can specify a different delimiter character using the --delimiter option, for example:

sqlite-transform jsonsplit \
    my.db mytable tags --delimiter ';'

Values within the array will be treated as strings, so a column containing 123,552,775 will be converted into the JSON array ["123", "552", "775"].

You can specify a different type for these values using --type int or --type float, for example:

sqlite-transform jsonsplit \
    my.db mytable tags --type int

This will result in that column being converted into [123, 552, 775].

lambda for executing your own code

The lambda subcommand lets you specify Python code which will be executed against the column.

Here's how to convert a column to uppercase:

sqlite-transform lambda my.db mytable mycolumn --code='str(value).upper()'

The code you provide will be compiled into a function that takes value as a single argument. You can break your function body into multiple lines, provided the last line is a return statement:

sqlite-transform lambda my.db mytable mycolumn --code='value = str(value)
return value.upper()'

You can also specify Python modules that should be imported and made available to your code using one or more --import options:

sqlite-transform lambda my.db mytable mycolumn \
    --code='""\n"".join(textwrap.wrap(value, 10))' \
    --import=textwrap

The --dry-run option will output a preview of the transformation against the first ten rows, without modifying the database.
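
For example, to preview the uppercase transformation without writing any changes:

sqlite-transform lambda my.db mytable mycolumn \
    --code='str(value).upper()' \
    --dry-run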

Saving the result to a separate column

Each of these commands accepts optional --output and --output-type options. These can be used to save the result of the transformation to a separate column, which will be created if the column does not already exist.

To save the result of jsonsplit to a new column called json_tags, use the following:

sqlite-transform jsonsplit my.db mytable tags \
  --output json_tags

The type of the created column defaults to text, but a different column type can be specified using --output-type. This example will create a new floating point column called float_id with a copy of each item's ID increased by 0.5:

sqlite-transform lambda my.db mytable id \
  --code 'float(value) + 0.5' \
  --output float_id \
  --output-type float

You can drop the original column at the end of the operation by adding --drop.
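
For example, to populate float_id and then remove the original id column in the same operation:

sqlite-transform lambda my.db mytable id \
  --code 'float(value) + 0.5' \
  --output float_id \
  --output-type float \
  --drop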

Splitting a column into multiple columns

Sometimes you may wish to convert a single column into multiple derived columns. For example, you may have a location column containing latitude,longitude values which you wish to split out into separate latitude and longitude columns.

You can achieve this using the --multi option to sqlite-transform lambda. This option expects your --code function to return a Python dictionary: new columns will be created and populated for each of the keys in that dictionary.

For the latitude,longitude example you would use the following:

sqlite-transform lambda demo.db places location \
  --code 'return {
    ""latitude"": float(value.split("","")[0]),
    ""longitude"": float(value.split("","")[1]),
  }' --multi

The type of the returned values will be taken into account when creating the new columns. In this example, the resulting database schema will look like this:

CREATE TABLE [places] (
    [location] TEXT,
    [latitude] FLOAT,
    [longitude] FLOAT
);

The code function can also return None, in which case its output will be ignored.

You can drop the original column at the end of the operation by adding --drop.

Disabling the progress bar

By default each command will show a progress bar. Pass -s or --silent to hide that progress bar.

",,,,,, 240815938,MDEwOlJlcG9zaXRvcnkyNDA4MTU5Mzg=,shapefile-to-sqlite,simonw/shapefile-to-sqlite,0,9599,https://github.com/simonw/shapefile-to-sqlite,Load shapefiles into a SQLite (optionally SpatiaLite) database,0,2020-02-16T01:55:29Z,2021-03-26T08:39:43Z,2020-08-23T06:00:41Z,,54,15,15,Python,1,1,1,1,0,0,0,0,3,apache-2.0,"[""sqlite"", ""gis"", ""spatialite"", ""shapefiles"", ""datasette"", ""datasette-io"", ""datasette-tool""]",0,3,15,main,"{""admin"": false, ""push"": false, ""pull"": false}",,,0,1,"# shapefile-to-sqlite [![PyPI](https://img.shields.io/pypi/v/shapefile-to-sqlite.svg)](https://pypi.org/project/shapefile-to-sqlite/) [![CircleCI](https://circleci.com/gh/simonw/shapefile-to-sqlite.svg?style=svg)](https://circleci.com/gh/simonw/shapefile-to-sqlite) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/shapefile-to-sqlite/blob/main/LICENSE) Load shapefiles into a SQLite (optionally SpatiaLite) database. Project background: [Things I learned about shapefiles building shapefile-to-sqlite](https://simonwillison.net/2020/Feb/19/shapefile-to-sqlite/) ## How to install $ pip install shapefile-to-sqlite ## How to use You can run this tool against a shapefile file like so: $ shapefile-to-sqlite my.db features.shp This will load the geometries as GeoJSON in a text column. ## Using with SpatiaLite If you have [SpatiaLite](https://www.gaia-gis.it/fossil/libspatialite/index) available you can load them as SpatiaLite geometries like this: $ shapefile-to-sqlite my.db features.shp --spatialite The data will be loaded into a table called `features` - based on the name of the shapefile. You can specify an alternative table name using `--table`: $ shapefile-to-sqlite my.db features.shp --table=places --spatialite The tool will search for the SpatiaLite module in the following locations: - `/usr/lib/x86_64-linux-gnu/mod_spatialite.so` - `/usr/local/lib/mod_spatialite.dylib` If you have installed the module in another location, you can use the `--spatialite_mod=xxx` option to specify where: $ shapefile-to-sqlite my.db features.shp \ --spatialite_mod=/usr/lib/mod_spatialite.dylib You can use the `--spatial-index` option to create a spatial index on the `geometry` column: $ shapefile-to-sqlite my.db features.shp --spatial-index You can omit `--spatialite` if you use either `--spatialite-mod` or `--spatial-index`. ## Projections By default, this tool will attempt to convert geometries in the shapefile to the WGS 84 projection, for best conformance with the [GeoJSON specification](https://tools.ietf.org/html/rfc7946). If you want it to leave the data in whatever projection was used by the shapefile, use the `--crs=keep` option. You can convert the data to another output projection by passing it to the `--crs` option. For example, to convert to [EPSG:2227](https://epsg.io/2227) (California zone 3) use `--crs=espg:2227`. The full list of formats accepted by the `--crs` option is [documented here](https://pyproj4.github.io/pyproj/stable/api/crs.html#pyproj.crs.CRS.__init__). 
## Extracting columns If your data contains columns with a small number of heavily duplicated values - the names of specific agencies responsible for parcels of land for example - you can extract those columns into separate lookup tables referenced by foreign keys using the `-c` option: $ shapefile-to-sqlite my.db features.shp -c agency This will create a `agency` table with `id` and `name` columns, and will create the `agency` column in your main table as an integer foreign key reference to that table. The `-c` option can be used multiple times. [CPAD_2020a_Units](https://calands.datasettes.com/calands/CPAD_2020a_Units) is an example of a table created using the `-c` option. ","

shapefile-to-sqlite

Load shapefiles into a SQLite (optionally SpatiaLite) database.

Project background: Things I learned about shapefiles building shapefile-to-sqlite (https://simonwillison.net/2020/Feb/19/shapefile-to-sqlite/)

How to install

$ pip install shapefile-to-sqlite

How to use

You can run this tool against a shapefile like so:

$ shapefile-to-sqlite my.db features.shp

This will load the geometries as GeoJSON in a text column.

Using with SpatiaLite

If you have SpatiaLite available you can load the geometries as SpatiaLite geometries like this:

$ shapefile-to-sqlite my.db features.shp --spatialite

The data will be loaded into a table called features - based on the name of the shapefile. You can specify an alternative table name using --table:

$ shapefile-to-sqlite my.db features.shp --table=places --spatialite

The tool will search for the SpatiaLite module in the following locations:

- /usr/lib/x86_64-linux-gnu/mod_spatialite.so
- /usr/local/lib/mod_spatialite.dylib

If you have installed the module in another location, you can use the --spatialite_mod=xxx option to specify where:

$ shapefile-to-sqlite my.db features.shp \
    --spatialite_mod=/usr/lib/mod_spatialite.dylib

You can use the --spatial-index option to create a spatial index on the geometry column:

$ shapefile-to-sqlite my.db features.shp --spatial-index

You can omit --spatialite if you use either --spatialite-mod or --spatial-index.

Projections

By default, this tool will attempt to convert geometries in the shapefile to the WGS 84 projection, for best conformance with the GeoJSON specification.

If you want it to leave the data in whatever projection was used by the shapefile, use the --crs=keep option.

You can convert the data to another output projection by passing it to the --crs option. For example, to convert to EPSG:2227 (California zone 3) use --crs=epsg:2227.

The full list of formats accepted by the --crs option is documented at https://pyproj4.github.io/pyproj/stable/api/crs.html#pyproj.crs.CRS.__init__
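
For example, to keep the shapefile's original projection:

$ shapefile-to-sqlite my.db features.shp --crs=keep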

Extracting columns

If your data contains columns with a small number of heavily duplicated values - the names of specific agencies responsible for parcels of land for example - you can extract those columns into separate lookup tables referenced by foreign keys using the -c option:

$ shapefile-to-sqlite my.db features.shp -c agency

This will create an agency table with id and name columns, and will create the agency column in your main table as an integer foreign key reference to that table.

The -c option can be used multiple times.
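
For example, to extract two columns in one run (the second column name here is hypothetical):

$ shapefile-to-sqlite my.db features.shp -c agency -c county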

CPAD_2020a_Units (https://calands.datasettes.com/calands/CPAD_2020a_Units) is an example of a table created using the -c option.

",,,,,, 245670670,MDEwOlJlcG9zaXRvcnkyNDU2NzA2NzA=,fec-to-sqlite,simonw/fec-to-sqlite,0,9599,https://github.com/simonw/fec-to-sqlite,Save FEC campaign finance data to a SQLite database,0,2020-03-07T16:52:49Z,2020-12-19T05:09:05Z,2020-03-07T18:21:48Z,,16,8,8,Python,1,1,1,1,0,0,0,0,1,apache-2.0,"[""sqlite"", ""fec"", ""datasette"", ""datasette-io"", ""datasette-tool""]",0,1,8,master,"{""admin"": false, ""push"": false, ""pull"": false}",,,0,2,"# fec-to-sqlite [![PyPI](https://img.shields.io/pypi/v/fec-to-sqlite.svg)](https://pypi.org/project/fec-to-sqlite/) [![CircleCI](https://circleci.com/gh/simonw/fec-to-sqlite.svg?style=svg)](https://circleci.com/gh/simonw/fec-to-sqlite) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/fec-to-sqlite/blob/master/LICENSE) Create a SQLite database using FEC campaign contributions data. This tool builds on [fecfile](https://github.com/esonderegger/) by Evan Sonderegger. ## How to install $ pip install fec-to-sqlite ## Usage $ fec-to-sqlite filings filings.db 1146148 This fetches the filing with ID `1146148` and stores it in tables in a SQLite database called `filings.db`. It will create any tables it needs. You can pass more than one filing ID, separated by spaces. ","

fec-to-sqlite

Create a SQLite database using FEC campaign contributions data.

This tool builds on fecfile by Evan Sonderegger.

How to install

$ pip install fec-to-sqlite

Usage

$ fec-to-sqlite filings filings.db 1146148

This fetches the filing with ID 1146148 and stores it in tables in a SQLite database called filings.db. It will create any tables it needs.

You can pass more than one filing ID, separated by spaces.
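
For example, to fetch two filings in a single run (the second ID here is purely illustrative):

$ fec-to-sqlite filings filings.db 1146148 1146149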

",,,,,, 255460347,MDEwOlJlcG9zaXRvcnkyNTU0NjAzNDc=,datasette-clone,simonw/datasette-clone,0,9599,https://github.com/simonw/datasette-clone,Create a local copy of database files from a Datasette instance,0,2020-04-13T23:05:41Z,2021-06-08T15:33:21Z,2021-02-22T19:32:36Z,,20,2,2,Python,1,1,1,1,0,0,0,0,0,apache-2.0,"[""datasette"", ""datasette-io"", ""datasette-tool""]",0,0,2,main,"{""admin"": false, ""push"": false, ""pull"": false}",,,0,1,"# datasette-clone [![PyPI](https://img.shields.io/pypi/v/datasette-clone.svg)](https://pypi.org/project/datasette-clone/) [![Changelog](https://img.shields.io/github/v/release/simonw/datasette-clone?include_prereleases&label=changelog)](https://github.com/simonw/datasette-clone/releases) [![Tests](https://github.com/simonw/datasette-clone/workflows/Test/badge.svg)](https://github.com/simonw/datasette-clone/actions?query=workflow%3ATest) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/datasette-clone/blob/main/LICENSE) Create a local copy of database files from a Datasette instance. See [datasette-clone](https://simonwillison.net/2020/Apr/14/datasette-clone/) on my blog for background on this project. ## How to install $ pip install datasette-clone ## Usage This only works against Datasette instances running immutable databases (with the `-i` option). Databases published using the `datasette publish` command should be compatible with this tool. To download copies of all `.db` files from an instance, run: datasette-clone https://latest.datasette.io You can provide an optional second argument to specify a directory: datasette-clone https://latest.datasette.io /tmp/here-please The command stores its own copy of a `databases.json` manifest and uses it to only download databases that have changed the next time you run the command. It also stores a copy of the instance's `metadata.json` to ensure you have a copy of any source and licensing information for the downloaded databases. If your instance is protected by an API token, you can use `--token` to provide it: datasette-clone https://latest.datasette.io --token=xyz For verbose output showing what the tool is doing, use `-v`. ","

datasette-clone

Create a local copy of database files from a Datasette instance.

See datasette-clone (https://simonwillison.net/2020/Apr/14/datasette-clone/) on my blog for background on this project.

How to install

$ pip install datasette-clone

Usage

This only works against Datasette instances running immutable databases (with the -i option). Databases published using the datasette publish command should be compatible with this tool.

To download copies of all .db files from an instance, run:

datasette-clone https://latest.datasette.io

You can provide an optional second argument to specify a directory:

datasette-clone https://latest.datasette.io /tmp/here-please

The command stores its own copy of a databases.json manifest and uses it on subsequent runs to download only databases that have changed.

It also stores a copy of the instance's metadata.json to ensure you have a copy of any source and licensing information for the downloaded databases.

If your instance is protected by an API token, you can use --token to provide it:

datasette-clone https://latest.datasette.io --token=xyz

For verbose output showing what the tool is doing, use -v.
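
These options can be combined - for example, cloning into a specific directory with a token and verbose output:

datasette-clone https://latest.datasette.io /tmp/here-please --token=xyz -v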

",,,,,, 274264484,MDEwOlJlcG9zaXRvcnkyNzQyNjQ0ODQ=,sqlite-generate,simonw/sqlite-generate,0,9599,https://github.com/simonw/sqlite-generate,Tool for generating demo SQLite databases,0,2020-06-22T23:36:44Z,2021-02-27T15:25:26Z,2021-02-27T15:25:24Z,https://sqlite-generate-demo.datasette.io/,56,17,17,Python,1,1,1,1,0,0,0,0,0,apache-2.0,"[""sqlite"", ""datasette-io"", ""datasette-tool""]",0,0,17,main,"{""admin"": false, ""push"": false, ""pull"": false}",,,0,2,"# sqlite-generate [![PyPI](https://img.shields.io/pypi/v/sqlite-generate.svg)](https://pypi.org/project/sqlite-generate/) [![Changelog](https://img.shields.io/github/v/release/simonw/sqlite-generate?label=changelog)](https://github.com/simonw/sqlite-generate/releases) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/sqlite-generate/blob/master/LICENSE) Tool for generating demo SQLite databases ## Installation Install this plugin using `pip`: $ pip install sqlite-generate ## Demo You can see a demo of the database generated using this command running in [Datasette](https://github.com/simonw/datasette) at https://sqlite-generate-demo.datasette.io/ The demo is generated using the following command: sqlite-generate demo.db --seed seed --fts --columns=10 --fks=0,3 --pks=0,2 ## Usage To generate a SQLite database file called `data.db` with 10 randomly named tables in it, run the following: sqlite-generate data.db You can use the `--tables` option to generate a different number of tables: sqlite-generate data.db --tables 20 You can run the command against the same database file multiple times to keep adding new tables, using different settings for each batch of generated tables. By default each table will contain a random number of rows between 0 and 200. You can customize this with the `--rows` option: sqlite-generate data.db --rows 20 This will insert 20 rows into each table. sqlite-generate data.db --rows 500,2000 This inserts a random number of rows between 500 and 2000 into each table. Each table will have 5 columns. You can change this using `--columns`: sqlite-generate data.db --columns 10 `--columns` can also accept a range: sqlite-generate data.db --columns 5,15 You can control the random number seed used with the `--seed` option. This will result in the exact same database file being created by multiple runs of the tool: sqlite-generate data.db --seed=myseed By default each table will contain between 0 and 2 foreign key columns to other tables. You can control this using the `--fks` option, with either a single number or a range: sqlite-generate data.db --columns=20 --fks=5,15 Each table will have a single primary key column called `id`. You can use the `--pks=` option to change the number of primary key columns on each table. Drop it to 0 to generate [rowid tables](https://www.sqlite.org/rowidtable.html). Increase it above 1 to generate tables with compound primary keys. Or use a range to get a random selection of different primary key layouts: sqlite-generate data.db --pks=0,2 To configure [SQLite full-text search](https://www.sqlite.org/fts5.html) for all columns of type text, use `--fts`: sqlite-generate data.db --fts This will use FTS5 by default. To use [FTS4](https://www.sqlite.org/fts3.html) instead, use `--fts4`. ## Development To contribute to this tool, first checkout the code. 
Then create a new virtual environment: cd sqlite-generate python -mvenv venv source venv/bin/activate Or if you are using `pipenv`: pipenv shell Now install the dependencies and tests: pip install -e '.[test]' To run the tests: pytest ","

sqlite-generate

Tool for generating demo SQLite databases

Installation

Install this tool using pip:

$ pip install sqlite-generate

Demo

You can see a demo of the database generated using this command running in Datasette at https://sqlite-generate-demo.datasette.io/

The demo is generated using the following command:

sqlite-generate demo.db --seed seed --fts --columns=10 --fks=0,3 --pks=0,2

Usage

To generate a SQLite database file called data.db with 10 randomly named tables in it, run the following:

sqlite-generate data.db

You can use the --tables option to generate a different number of tables:

sqlite-generate data.db --tables 20

You can run the command against the same database file multiple times to keep adding new tables, using different settings for each batch of generated tables.

By default each table will contain a random number of rows between 0 and 200. You can customize this with the --rows option:

sqlite-generate data.db --rows 20

This will insert 20 rows into each table.

sqlite-generate data.db --rows 500,2000

This inserts a random number of rows between 500 and 2000 into each table.

Each table will have 5 columns. You can change this using --columns:

sqlite-generate data.db --columns 10

--columns can also accept a range:

sqlite-generate data.db --columns 5,15

You can control the random number seed used with the --seed option. This will result in the exact same database file being created by multiple runs of the tool:

sqlite-generate data.db --seed=myseed

By default each table will contain between 0 and 2 foreign key columns to other tables. You can control this using the --fks option, with either a single number or a range:

sqlite-generate data.db --columns=20 --fks=5,15

Each table will have a single primary key column called id. You can use the --pks= option to change the number of primary key columns on each table. Drop it to 0 to generate rowid tables. Increase it above 1 to generate tables with compound primary keys. Or use a range to get a random selection of different primary key layouts:

sqlite-generate data.db --pks=0,2

To configure SQLite full-text search for all columns of type text, use --fts:

sqlite-generate data.db --fts

This will use FTS5 by default. To use FTS4 instead, use --fts4.
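
These options can all be combined - for example, to build a reproducible database with ranged row and column counts and FTS4 search enabled (the values here are illustrative):

sqlite-generate data.db --tables 5 --rows 500,2000 --columns 5,15 --fks 0,3 --pks 0,2 --fts4 --seed=myseed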

Development

To contribute to this tool, first check out the code. Then create a new virtual environment:

cd sqlite-generate
python -mvenv venv
source venv/bin/activate

Or if you are using pipenv:

pipenv shell

Now install the dependencies and test dependencies:

pip install -e '.[test]'

To run the tests:

pytest
",,,,,, 291339086,MDEwOlJlcG9zaXRvcnkyOTEzMzkwODY=,airtable-export,simonw/airtable-export,0,9599,https://github.com/simonw/airtable-export,"Export Airtable data to YAML, JSON or SQLite files on disk",0,2020-08-29T19:51:37Z,2021-06-08T17:30:30Z,2021-04-09T23:41:52Z,https://datasette.io/tools/airtable-export,41,33,33,Python,1,1,1,1,0,5,0,0,6,apache-2.0,"[""yaml"", ""airtable"", ""airtable-api"", ""datasette-io"", ""datasette-tool""]",5,6,33,main,"{""admin"": false, ""push"": false, ""pull"": false}",,,5,3,"# airtable-export [![PyPI](https://img.shields.io/pypi/v/airtable-export.svg)](https://pypi.org/project/airtable-export/) [![Changelog](https://img.shields.io/github/v/release/simonw/airtable-export?include_prereleases&label=changelog)](https://github.com/simonw/airtable-export/releases) [![Tests](https://github.com/simonw/airtable-export/workflows/Test/badge.svg)](https://github.com/simonw/airtable-export/actions?query=workflow%3ATest) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/airtable-export/blob/master/LICENSE) Export Airtable data to files on disk ## Installation Install this tool using `pip`: $ pip install airtable-export ## Usage You will need to know the following information: - Your Airtable base ID - this is a string starting with `app...` - Your Airtable API key - this is a string starting with `key...` - The names of each of the tables that you wish to export You can export all of your data to a folder called `export/` by running the following: airtable-export export base_id table1 table2 --key=key This example would create two files: `export/table1.yml` and `export/table2.yml`. Rather than passing the API key using the `--key` option you can set it as an environment variable called `AIRTABLE_KEY`. ## Export options By default the tool exports your data as YAML. You can also export as JSON or as [newline delimited JSON](http://ndjson.org/) using the `--json` or `--ndjson` options: airtable-export export base_id table1 table2 --key=key --ndjson You can pass multiple format options at once. This command will create a `.json`, `.yml` and `.ndjson` file for each exported table: airtable-export export base_id table1 table2 \ --key=key --ndjson --yaml --json ### SQLite database export You can export tables to a SQLite database file using the `--sqlite database.db` option: airtable-export export base_id table1 table2 \ --key=key --sqlite database.db This can be combined with other format options. If you only specify `--sqlite` the export directory argument will be ignored. The SQLite database will have a table created for each table you export. Those tables will have a primary key column called `airtable_id`. If you run this command against an existing SQLite database records with matching primary keys will be over-written by new records from the export. ## Request options By default the tool uses [python-httpx](https://www.python-httpx.org)'s default configurations. You can override the `user-agent` using the `--user-agent` option: airtable-export export base_id table1 table2 --key=key --user-agent ""Airtable Export Robot"" You can override the [timeout during a network read operation](https://www.python-httpx.org/advanced/#fine-tuning-the-configuration) using the `--http-read-timeout` option. If not set, this defaults to 5s. 
airtable-export export base_id table1 table2 --key=key --http-read-timeout 60 ## Running this using GitHub Actions [GitHub Actions](https://github.com/features/actions) is GitHub's workflow automation product. You can use it to run `airtable-export` in order to back up your Airtable data to a GitHub repository. Doing this gives you a visible commit history of changes you make to your Airtable data - like [this one](https://github.com/natbat/rockybeaches/commits/main/airtable). To run this for your own Airtable database you'll first need to add the following secrets to your GitHub repository:
AIRTABLE_BASE_ID
The base ID, a string beginning `app...`
AIRTABLE_KEY
Your Airtable API key
AIRTABLE_TABLES
A space separated list of the Airtable tables that you want to backup. If any of these contain spaces you will need to enclose them in single quotes, e.g. 'My table with spaces in the name' OtherTableWithNoSpaces
Once you have set those secrets, add the following as a file called `.github/workflows/backup-airtable.yml`: ```yaml name: Backup Airtable on: workflow_dispatch: schedule: - cron: '32 0 * * *' jobs: build: runs-on: ubuntu-latest steps: - name: Check out repo uses: actions/checkout@v2 - name: Set up Python uses: actions/setup-python@v2 with: python-version: 3.8 - uses: actions/cache@v2 name: Configure pip caching with: path: ~/.cache/pip key: ${{ runner.os }}-pip- restore-keys: | ${{ runner.os }}-pip- - name: Install airtable-export run: | pip install airtable-export - name: Backup Airtable to backups/ env: AIRTABLE_BASE_ID: ${{ secrets.AIRTABLE_BASE_ID }} AIRTABLE_KEY: ${{ secrets.AIRTABLE_KEY }} AIRTABLE_TABLES: ${{ secrets.AIRTABLE_TABLES }} run: |- airtable-export backups $AIRTABLE_BASE_ID $AIRTABLE_TABLES -v - name: Commit and push if it changed run: |- git config user.name ""Automated"" git config user.email ""actions@users.noreply.github.com"" git add -A timestamp=$(date -u) git commit -m ""Latest data: ${timestamp}"" || exit 0 git push ``` This will run once a day (at 32 minutes past midnight UTC) and will also run if you manually click the ""Run workflow"" button, see [GitHub Actions: Manual triggers with workflow_dispatch](https://github.blog/changelog/2020-07-06-github-actions-manual-triggers-with-workflow_dispatch/). ## Development To contribute to this tool, first checkout the code. Then create a new virtual environment: cd airtable-export python -mvenv venv source venv/bin/activate Or if you are using `pipenv`: pipenv shell Now install the dependencies and tests: pip install -e '.[test]' To run the tests: pytest ","

airtable-export

Export Airtable data to files on disk

Installation

Install this tool using pip:

$ pip install airtable-export

Usage

You will need to know the following information:

- Your Airtable base ID - this is a string starting with app...
- Your Airtable API key - this is a string starting with key...
- The names of each of the tables that you wish to export

You can export all of your data to a folder called export/ by running the following:

airtable-export export base_id table1 table2 --key=key

This example would create two files: export/table1.yml and export/table2.yml.

Rather than passing the API key using the --key option you can set it as an environment variable called AIRTABLE_KEY.
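
For example, assuming a bash-style shell:

export AIRTABLE_KEY=key
airtable-export export base_id table1 table2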

Export options

By default the tool exports your data as YAML.

You can also export as JSON or as newline delimited JSON using the --json or --ndjson options:

airtable-export export base_id table1 table2 --key=key --ndjson

You can pass multiple format options at once. This command will create a .json, .yml and .ndjson file for each exported table:

airtable-export export base_id table1 table2 \
    --key=key --ndjson --yaml --json

SQLite database export

You can export tables to a SQLite database file using the --sqlite database.db option:

airtable-export export base_id table1 table2 \
    --key=key --sqlite database.db

This can be combined with other format options. If you only specify --sqlite the export directory argument will be ignored.

The SQLite database will have a table created for each table you export. Those tables will have a primary key column called airtable_id.

If you run this command against an existing SQLite database records with matching primary keys will be over-written by new records from the export.

Request options

By default the tool uses python-httpx's default configurations.

You can override the user-agent using the --user-agent option:

airtable-export export base_id table1 table2 --key=key --user-agent "Airtable Export Robot"

You can override the timeout during a network read operation using the --http-read-timeout option. If not set, this defaults to 5s.

airtable-export export base_id table1 table2 --key=key --http-read-timeout 60

Running this using GitHub Actions

GitHub Actions is GitHub's workflow automation product. You can use it to run airtable-export in order to back up your Airtable data to a GitHub repository. Doing this gives you a visible commit history of changes you make to your Airtable data - like this one (https://github.com/natbat/rockybeaches/commits/main/airtable).

To run this for your own Airtable database you'll first need to add the following secrets to your GitHub repository:

- AIRTABLE_BASE_ID: The base ID, a string beginning app...
- AIRTABLE_KEY: Your Airtable API key
- AIRTABLE_TABLES: A space separated list of the Airtable tables that you want to backup. If any of these contain spaces you will need to enclose them in single quotes, e.g. 'My table with spaces in the name' OtherTableWithNoSpaces

Once you have set those secrets, add the following as a file called .github/workflows/backup-airtable.yml:

name: Backup Airtable

on:
  workflow_dispatch:
  schedule:
  - cron: '32 0 * * *'

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
    - name: Check out repo
      uses: actions/checkout@v2
    - name: Set up Python
      uses: actions/setup-python@v2
      with:
        python-version: 3.8
    - uses: actions/cache@v2
      name: Configure pip caching
      with:
        path: ~/.cache/pip
        key: ${{ runner.os }}-pip-
        restore-keys: |
          ${{ runner.os }}-pip-
    - name: Install airtable-export
      run: |
        pip install airtable-export
    - name: Backup Airtable to backups/
      env:
        AIRTABLE_BASE_ID: ${{ secrets.AIRTABLE_BASE_ID }}
        AIRTABLE_KEY: ${{ secrets.AIRTABLE_KEY }}
        AIRTABLE_TABLES: ${{ secrets.AIRTABLE_TABLES }}
      run: |-
        airtable-export backups $AIRTABLE_BASE_ID $AIRTABLE_TABLES -v
    - name: Commit and push if it changed
      run: |-
        git config user.name ""Automated""
        git config user.email ""actions@users.noreply.github.com""
        git add -A
        timestamp=$(date -u)
        git commit -m ""Latest data: ${timestamp}"" || exit 0
        git push

This will run once a day (at 32 minutes past midnight UTC) and will also run if you manually click the ""Run workflow"" button; see GitHub Actions: Manual triggers with workflow_dispatch.

Development

To contribute to this tool, first check out the code. Then create a new virtual environment:

cd airtable-export
python -mvenv venv
source venv/bin/activate

Or if you are using pipenv:

pipenv shell

Now install the dependencies and test dependencies:

pip install -e '.[test]'

To run the tests:

pytest
",,,,,, 305199661,MDEwOlJlcG9zaXRvcnkzMDUxOTk2NjE=,sphinx-to-sqlite,simonw/sphinx-to-sqlite,0,9599,https://github.com/simonw/sphinx-to-sqlite,Create a SQLite database from Sphinx documentation,0,2020-10-18T21:26:55Z,2020-12-19T05:08:12Z,2020-10-22T04:55:45Z,,9,2,2,Python,1,1,1,1,0,0,0,0,2,apache-2.0,"[""sqlite"", ""sphinx"", ""datasette-io"", ""datasette-tool""]",0,2,2,main,"{""admin"": false, ""push"": false, ""pull"": false}",,,0,2,"# sphinx-to-sqlite [![PyPI](https://img.shields.io/pypi/v/sphinx-to-sqlite.svg)](https://pypi.org/project/sphinx-to-sqlite/) [![Changelog](https://img.shields.io/github/v/release/simonw/sphinx-to-sqlite?include_prereleases&label=changelog)](https://github.com/simonw/sphinx-to-sqlite/releases) [![Tests](https://github.com/simonw/sphinx-to-sqlite/workflows/Test/badge.svg)](https://github.com/simonw/sphinx-to-sqlite/actions?query=workflow%3ATest) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/sphinx-to-sqlite/blob/master/LICENSE) Create a SQLite database from Sphinx documentation. ## Demo You can see the results of running this tool against the [Datasette documentation](https://docs.datasette.io/) at https://latest-docs.datasette.io/docs/sections ## Installation Install this tool using `pip`: $ pip install sphinx-to-sqlite ## Usage First run `sphinx-build` with the `-b xml` option to create XML files in your `_build/` directory. Then run: $ sphinx-to-sqlite docs.db path/to/_build To build the SQLite database. ## Development To contribute to this tool, first checkout the code. Then create a new virtual environment: cd sphinx-to-sqlite python -mvenv venv source venv/bin/activate Or if you are using `pipenv`: pipenv shell Now install the dependencies and tests: pip install -e '.[test]' To run the tests: pytest ","

sphinx-to-sqlite

Create a SQLite database from Sphinx documentation.

Demo

You can see the results of running this tool against the Datasette documentation at https://latest-docs.datasette.io/docs/sections

Installation

Install this tool using pip:

$ pip install sphinx-to-sqlite

Usage

First run sphinx-build with the -b xml option to create XML files in your _build/ directory.
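
For example, if your Sphinx source files live in a docs/ directory (the paths here are illustrative), the build step might look like this:

$ sphinx-build -b xml docs/ docs/_build/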

Then run:

$ sphinx-to-sqlite docs.db path/to/_build

This will build the SQLite database.

Development

To contribute to this tool, first check out the code. Then create a new virtual environment:

cd sphinx-to-sqlite
python -mvenv venv
source venv/bin/activate

Or if you are using pipenv:

pipenv shell

Now install the dependencies and test dependencies:

pip install -e '.[test]'

To run the tests:

pytest
",,,,,, 335372050,MDEwOlJlcG9zaXRvcnkzMzUzNzIwNTA=,download-tiles,simonw/download-tiles,0,9599,https://github.com/simonw/download-tiles,Download map tiles and store them in an MBTiles database,0,2021-02-02T17:37:49Z,2021-05-29T07:22:58Z,2021-02-16T04:19:59Z,https://datasette.io/tools/download-tiles,26,9,9,Python,1,1,1,1,0,0,0,0,0,apache-2.0,"[""openstreetmap"", ""mbtiles"", ""datasette-io"", ""datasette-tool""]",0,0,9,main,"{""admin"": false, ""push"": false, ""pull"": false}",,,0,1,"# download-tiles [![PyPI](https://img.shields.io/pypi/v/download-tiles.svg)](https://pypi.org/project/download-tiles/) [![Changelog](https://img.shields.io/github/v/release/simonw/download-tiles?include_prereleases&label=changelog)](https://github.com/simonw/download-tiles/releases) [![Tests](https://github.com/simonw/download-tiles/workflows/Test/badge.svg)](https://github.com/simonw/download-tiles/actions?query=workflow%3ATest) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/download-tiles/blob/master/LICENSE) Download map tiles and store them in an MBTiles database ## Installation Install this tool using `pip`: $ pip install download-tiles ## Usage This tool downloads tiles from a specified [TMS (Tile Map Server)](https://wiki.openstreetmap.org/wiki/TMS) server for a specified bounding box and range of zoom levels and stores those tiles in a MBTiles SQLite database. It is a command-line wrapper around the [Landez](https://github.com/makinacorpus/landez) Python libary. **Please use this tool responsibly**. Consult the usage policies of the tile servers you are interacting with, for example the [OpenStreetMap Tile Usage Policy](https://operations.osmfoundation.org/policies/tiles/). Running the following will download zoom levels 0-3 of OpenStreetMap, 85 tiles total, and store them in a SQLite database called `world.mbtiles`: download-tiles world.mbtiles You can customize which tile and zoom levels are downloaded using command options: `--zoom-levels=0-3` or `-z=0-3` The different zoom levels to download. Specify a single number, e.g. `15`, or a range of numbers e.g. `0-4`. Be careful with this setting as you can easily go over the limits requested by the underlying tile server. `--bbox=3.9,-6.3,14.5,10.2` or `-b=3.9,-6.3,14.5,10.2` The bounding box to fetch. Should be specified as `min-lon,min-lat,max-lon,max-lat`. You can use [bboxfinder.com](http://bboxfinder.com/) to find these for different areas. `--city=london` or `--country=madagascar` These options can be used instead of `--bbox`. The city or country specified will be looked up using the [Nominatum API](https://nominatim.org/release-docs/latest/api/Search/) and used to derive a bounding box. `--show-bbox` Use this option to output the bounding box that was retrieved for the `--city` or `--country` without downloading any tiles. `--name=Name` A name for this tile collection, used for the `name` field in the `metadata` table. If not specified a UUID will be used, or if you used `--city` or `--country` the name will be set to the full name of that place. `--attribution=""Attribution string""` Attribution string to bake into the `metadata` table. This will default to `© OpenStreetMap contributors` unless you use `--tiles-url` to specify an alternative tile server, in which case you should specify a custom attribution string. You can use the `--attribution=osm` shortcut to specify the `© OpenStreetMap contributors` value without having to type it out in full. 
`--tiles-url=https://...` The tile server URL to use. This should include `{z}` and `{x}` and `{y}` specifiers, and can optionally include `{s}` for subdomains. The default URL used here is for OpenStreetMap, `http://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png` `--tiles-subdomains=a,b,c` A comma-separated list of subdomains to use for the `{s}` parameter. `--verbose` Use this option to turn on verbose logging. `--cache-dir=/tmp/tiles` Provide a directory to cache downloaded tiles between runs. This can be useful if you are worried you might not have used the correct options for the bounding box or zoom levels. ## Development To contribute to this tool, first checkout the code. Then create a new virtual environment: cd download-tiles python -mvenv venv source venv/bin/activate Or if you are using `pipenv`: pipenv shell Now install the dependencies and tests: pip install -e '.[test]' To run the tests: pytest ","

download-tiles

Download map tiles and store them in an MBTiles database

Installation

Install this tool using pip:

$ pip install download-tiles

Usage

This tool downloads tiles from a specified TMS (Tile Map Server) server for a specified bounding box and range of zoom levels and stores those tiles in an MBTiles SQLite database. It is a command-line wrapper around the Landez Python library.

Please use this tool responsibly. Consult the usage policies of the tile servers you are interacting with, for example the OpenStreetMap Tile Usage Policy.

Running the following will download zoom levels 0-3 of OpenStreetMap, 85 tiles total, and store them in a SQLite database called world.mbtiles:

download-tiles world.mbtiles

You can customize which tile and zoom levels are downloaded using command options:

--zoom-levels=0-3 or -z=0-3

The different zoom levels to download. Specify a single number, e.g. 15, or a range of numbers e.g. 0-4. Be careful with this setting as you can easily go over the limits requested by the underlying tile server.

--bbox=3.9,-6.3,14.5,10.2 or -b=3.9,-6.3,14.5,10.2

The bounding box to fetch. Should be specified as min-lon,min-lat,max-lon,max-lat. You can use bboxfinder.com to find these for different areas.

--city=london or --country=madagascar

These options can be used instead of --bbox. The city or country specified will be looked up using the Nominatim API and used to derive a bounding box.

--show-bbox

Use this option to output the bounding box that was retrieved for the --city or --country without downloading any tiles.

--name=Name

A name for this tile collection, used for the name field in the metadata table. If not specified a UUID will be used, or if you used --city or --country the name will be set to the full name of that place.

--attribution=""Attribution string""

Attribution string to bake into the metadata table. This will default to © OpenStreetMap contributors unless you use --tiles-url to specify an alternative tile server, in which case you should specify a custom attribution string.

You can use the --attribution=osm shortcut to specify the © OpenStreetMap contributors value without having to type it out in full.

--tiles-url=https://...

The tile server URL to use. This should include {z} and {x} and {y} specifiers, and can optionally include {s} for subdomains.

The default URL used here is for OpenStreetMap, http://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png

--tiles-subdomains=a,b,c

A comma-separated list of subdomains to use for the {s} parameter.

--verbose

Use this option to turn on verbose logging.

--cache-dir=/tmp/tiles

Provide a directory to cache downloaded tiles between runs. This can be useful if you are worried you might not have used the correct options for the bounding box or zoom levels.
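
For example, a run combining several of these options might look like this (the city, zoom levels and name used here are illustrative):

download-tiles london.mbtiles \
    --city=london \
    --zoom-levels=0-10 \
    --name='London zoom 0-10'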

Development

To contribute to this tool, first check out the code. Then create a new virtual environment:

cd download-tiles
python -mvenv venv
source venv/bin/activate

Or if you are using pipenv:

pipenv shell

Now install the dependencies and test dependencies:

pip install -e '.[test]'

To run the tests:

pytest
",,,,,, 346597557,MDEwOlJlcG9zaXRvcnkzNDY1OTc1NTc=,tableau-to-sqlite,simonw/tableau-to-sqlite,0,9599,https://github.com/simonw/tableau-to-sqlite,Fetch data from Tableau into a SQLite database,0,2021-03-11T06:12:02Z,2021-06-10T04:40:44Z,2021-04-29T16:11:03Z,,212,8,8,Python,1,1,1,1,0,2,0,0,2,apache-2.0,"[""datasette-io"", ""datasette-tool""]",2,2,8,main,"{""admin"": false, ""push"": false, ""pull"": false}",,,2,1,"# tableau-to-sqlite [![PyPI](https://img.shields.io/pypi/v/tableau-to-sqlite.svg)](https://pypi.org/project/tableau-to-sqlite/) [![Changelog](https://img.shields.io/github/v/release/simonw/tableau-to-sqlite?include_prereleases&label=changelog)](https://github.com/simonw/tableau-to-sqlite/releases) [![Tests](https://github.com/simonw/tableau-to-sqlite/workflows/Test/badge.svg)](https://github.com/simonw/tableau-to-sqlite/actions?query=workflow%3ATest) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/tableau-to-sqlite/blob/master/LICENSE) Fetch data from Tableau into a SQLite database. A wrapper around [TableauScraper](https://github.com/bertrandmartel/tableau-scraping/). ## Installation Install this tool using `pip`: $ pip install tableau-to-sqlite ## Usage If you have the URL to a Tableau dashboard like this: https://results.mo.gov/t/COVID19/views/VaccinationsDashboard/Vaccinations You can pass that directly to the tool: tableau-to-sqlite tableau.db \ https://results.mo.gov/t/COVID19/views/VaccinationsDashboard/Vaccinations This will create a SQLite database called `tableau.db` containing one table for each of the worksheets in that dashboard. If the dashboard is hosted on https://public.tableau.com/ you can instead provide the view name. This will be two strings separated by a `/` symbol - something like this: OregonCOVID-19VaccineProviderEnrollment/COVID-19VaccineProviderEnrollment Now run the tool like this: tableau-to-sqlite tableau.db \ OregonCOVID-19VaccineProviderEnrollment/COVID-19VaccineProviderEnrollment ## Get the data as JSON or CSV If you're building a [git scraper](https://simonwillison.net/2020/Oct/9/git-scraping/) you may want to convert the data gathered by this tool to CSV or JSON to check into your repository. You can do that using [sqlite-utils](https://sqlite-utils.datasette.io/). Install it using `pip`: pip install sqlite-utils You can dump out a table as JSON like so: sqlite-utils rows tableau.db \ 'Admin Site and County Map Site No Info' > tableau.json Or as CSV like this: sqlite-utils rows tableau.db --csv \ 'Admin Site and County Map Site No Info' > tableau.csv ## Development To contribute to this tool, first checkout the code. Then create a new virtual environment: cd tableau-to-sqlite python -mvenv venv source venv/bin/activate Or if you are using `pipenv`: pipenv shell Now install the dependencies and tests: pip install -e '.[test]' To run the tests: pytest ","

tableau-to-sqlite

Fetch data from Tableau into a SQLite database. A wrapper around TableauScraper.

Installation

Install this tool using pip:

$ pip install tableau-to-sqlite

Usage

If you have the URL to a Tableau dashboard like this:

https://results.mo.gov/t/COVID19/views/VaccinationsDashboard/Vaccinations

You can pass that directly to the tool:

tableau-to-sqlite tableau.db \
  https://results.mo.gov/t/COVID19/views/VaccinationsDashboard/Vaccinations

This will create a SQLite database called tableau.db containing one table for each of the worksheets in that dashboard.

If the dashboard is hosted on https://public.tableau.com/ you can instead provide the view name. This will be two strings separated by a / symbol - something like this:

OregonCOVID-19VaccineProviderEnrollment/COVID-19VaccineProviderEnrollment

Now run the tool like this:

tableau-to-sqlite tableau.db \
    OregonCOVID-19VaccineProviderEnrollment/COVID-19VaccineProviderEnrollment

Get the data as JSON or CSV

If you're building a git scraper you may want to convert the data gathered by this tool to CSV or JSON to check into your repository.

You can do that using sqlite-utils. Install it using pip:

pip install sqlite-utils

You can dump out a table as JSON like so:

sqlite-utils rows tableau.db \
   'Admin Site and County Map Site No Info' > tableau.json

Or as CSV like this:

sqlite-utils rows tableau.db --csv \
   'Admin Site and County Map Site No Info' > tableau.csv

Development

To contribute to this tool, first check out the code. Then create a new virtual environment:

cd tableau-to-sqlite
python -mvenv venv
source venv/bin/activate

Or if you are using pipenv:

pipenv shell

Now install the dependencies and test dependencies:

pip install -e '.[test]'

To run the tests:

pytest
",,,,,,