id,node_id,name,full_name,private,owner,html_url,description,fork,created_at,updated_at,pushed_at,homepage,size,stargazers_count,watchers_count,language,has_issues,has_projects,has_downloads,has_wiki,has_pages,forks_count,archived,disabled,open_issues_count,license,topics,forks,open_issues,watchers,default_branch,permissions,temp_clone_token,organization,network_count,subscribers_count,readme,readme_html,allow_forking,visibility,is_template,template_repository,web_commit_signoff_required,has_discussions
166159072,MDEwOlJlcG9zaXRvcnkxNjYxNTkwNzI=,db-to-sqlite,simonw/db-to-sqlite,0,9599,https://github.com/simonw/db-to-sqlite,CLI tool for exporting tables or queries from any SQL database to a SQLite file,0,2019-01-17T04:16:48Z,2021-06-11T22:52:12Z,2021-06-11T22:55:56Z,,77,226,226,Python,1,1,1,1,0,12,0,0,2,apache-2.0,"[""sqlalchemy"", ""sqlite"", ""datasette"", ""datasette-io"", ""datasette-tool""]",12,2,226,main,"{""admin"": false, ""push"": false, ""pull"": false}",,,12,4,"# db-to-sqlite
[](https://pypi.python.org/pypi/db-to-sqlite)
[](https://github.com/simonw/db-to-sqlite/releases)
[](https://github.com/simonw/db-to-sqlite/actions?query=workflow%3ATest)
[](https://github.com/simonw/db-to-sqlite/blob/main/LICENSE)
CLI tool for exporting tables or queries from any SQL database to a SQLite file.
## Installation
Install from PyPI like so:
pip install db-to-sqlite
If you want to use it with MySQL, you can install the extra dependency like this:
pip install 'db-to-sqlite[mysql]'
Installing the `mysqlclient` library on OS X can be tricky - I've found [this recipe](https://gist.github.com/simonw/90ac0afd204cd0d6d9c3135c3888d116) to work (run that before installing `db-to-sqlite`).
For PostgreSQL, use this:
pip install 'db-to-sqlite[postgresql]'
## Usage
Usage: db-to-sqlite [OPTIONS] CONNECTION PATH
Load data from any database into SQLite.
PATH is a path to the SQLite file to create, e.g. /tmp/my_database.db
CONNECTION is a SQLAlchemy connection string, for example:
postgresql://localhost/my_database
postgresql://username:passwd@localhost/my_database
mysql://root@localhost/my_database
mysql://username:passwd@localhost/my_database
More: https://docs.sqlalchemy.org/en/13/core/engines.html#database-urls
Options:
--version Show the version and exit.
--all Detect and copy all tables
--table TEXT Specific tables to copy
--skip TEXT When using --all skip these tables
--redact TEXT... (table, column) pairs to redact with ***
--sql TEXT Optional SQL query to run
--output TEXT Table in which to save --sql query results
--pk TEXT Optional column to use as a primary key
--index-fks / --no-index-fks Should foreign keys have indexes? Default on
-p, --progress Show progress bar
--postgres-schema TEXT PostgreSQL schema to use
--help Show this message and exit.
For example, to save the content of the `blog_entry` table from a PostgreSQL database to a local file called `blog.db` you could do this:
db-to-sqlite ""postgresql://localhost/myblog"" blog.db \
--table=blog_entry
You can specify `--table` more than once.
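For example, to copy the `blog_entry` table and a hypothetical `blog_tag` table in one run:
db-to-sqlite ""postgresql://localhost/myblog"" blog.db \
--table=blog_entry \
--table=blog_tag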
You can also save the data from all of your tables, effectively creating a SQLite copy of your entire database. Any foreign key relationships will be detected and added to the SQLite database. For example:
db-to-sqlite ""postgresql://localhost/myblog"" blog.db \
--all
When running `--all` you can specify tables to skip using `--skip`:
db-to-sqlite ""postgresql://localhost/myblog"" blog.db \
--all \
--skip=django_migrations
If you want to save the results of a custom SQL query, do this:
db-to-sqlite ""postgresql://localhost/myblog"" output.db \
--output=query_results \
--sql=""select id, title, created from blog_entry"" \
--pk=id
The `--output` option specifies the table that should contain the results of the query.
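The `--redact` option takes a table name and a column name, and replaces every value in that column with `***` in the SQLite copy. A sketch, assuming a `users` table with a `password` column (repeat the option to redact more columns):
db-to-sqlite ""postgresql://localhost/myblog"" blog.db \
--all \
--redact users password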
## Using db-to-sqlite with PostgreSQL schemas
If the tables you want to copy from your PostgreSQL database aren't in the default schema, you can specify an alternate one with the `--postgres-schema` option:
db-to-sqlite ""postgresql://localhost/myblog"" blog.db \
--all \
--postgres-schema my_schema
## Using db-to-sqlite with Heroku Postgres
If you run an application on [Heroku](https://www.heroku.com/) using their [Postgres database product](https://www.heroku.com/postgres), you can use the `heroku config` command to access a compatible connection string:
$ heroku config --app myappname | grep HEROKU_POSTG
HEROKU_POSTGRESQL_OLIVE_URL: postgres://username:password@ec2-xxx-xxx-xxx-x.compute-1.amazonaws.com:5432/dbname
You can pass this to `db-to-sqlite` to create a local SQLite database with the data from your Heroku instance.
You can even do this using a bash one-liner:
$ db-to-sqlite $(heroku config --app myappname | grep HEROKU_POSTG | cut -d: -f 2-) \
/tmp/heroku.db --all -p
1/23: django_migrations
...
17/23: blog_blogmark
[####################################] 100%
...
## Related projects
* [Datasette](https://github.com/simonw/datasette): A tool for exploring and publishing data. Works great with SQLite files generated using `db-to-sqlite`.
* [sqlite-utils](https://github.com/simonw/sqlite-utils): Python CLI utility and library for manipulating SQLite databases.
* [csvs-to-sqlite](https://github.com/simonw/csvs-to-sqlite): Convert CSV files into a SQLite database.
## Development
To set up this tool locally, first check out the code. Then create a new virtual environment:
cd db-to-sqlite
python3 -mvenv venv
source venv/bin/activate
Or if you are using `pipenv`:
pipenv shell
Now install the dependencies and test dependencies:
pip install -e '.[test]'
To run the tests:
pytest
This will skip tests against MySQL or PostgreSQL if you do not have their additional dependencies installed.
You can install those extra dependencies like so:
pip install -e '.[test_mysql,test_postgresql]'
You can alternatively use `pip install psycopg2-binary` if you cannot install the `psycopg2` dependency used by the `test_postgresql` extra.
See [Running a MySQL server using Homebrew](https://til.simonwillison.net/homebrew/mysql-homebrew) for tips on running the tests against MySQL on macOS, including how to install the `mysqlclient` dependency.
The PostgreSQL and MySQL tests expect to run against servers on localhost by default. You can use environment variables to point them at different test database servers:
- `MYSQL_TEST_DB_CONNECTION` - defaults to `mysql://root@localhost/test_db_to_sqlite`
- `POSTGRESQL_TEST_DB_CONNECTION` - defaults to `postgresql://localhost/test_db_to_sqlite`
The database you indicate in the environment variable - `test_db_to_sqlite` by default - will be deleted and recreated on every test run.
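For example, to run the tests against a MySQL server on a different host (the hostname here is illustrative):
MYSQL_TEST_DB_CONNECTION=mysql://root@db.example.com/test_db_to_sqlite pytest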
","
",,,,,,
243887036,MDEwOlJlcG9zaXRvcnkyNDM4ODcwMzY=,datasette-configure-fts,simonw/datasette-configure-fts,0,9599,https://github.com/simonw/datasette-configure-fts,Datasette plugin for enabling full-text search against selected table columns,0,2020-02-29T01:50:57Z,2020-11-01T02:59:12Z,2020-11-01T02:59:10Z,,42,2,2,Python,1,1,1,1,0,0,0,0,2,apache-2.0,"[""datasette"", ""datasette-plugin"", ""datasette-io""]",0,2,2,main,"{""admin"": false, ""push"": false, ""pull"": false}",,,0,1,"# datasette-configure-fts
[](https://pypi.org/project/datasette-configure-fts/)
[](https://github.com/simonw/datasette-configure-fts/releases)
[](https://github.com/simonw/datasette-configure-fts/actions?query=workflow%3ATest)
[](https://github.com/simonw/datasette-configure-fts/blob/main/LICENSE)
Datasette plugin for enabling full-text search against selected table columns
## Installation
Install this plugin in the same environment as Datasette.
$ datasette install datasette-configure-fts
## Usage
Having installed the plugin, visit `/-/configure-fts` on your Datasette instance to configure FTS for tables on attached writable databases.
Any time you have permission to configure FTS for a table, a menu item will appear in the table actions menu on the table page.
By default only [the root actor](https://datasette.readthedocs.io/en/stable/authentication.html#using-the-root-actor) can access the page - so you'll need to run Datasette with the `--root` option and click on the link shown in the terminal to sign in and access the page.
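For example, using a hypothetical database file:
$ datasette mydatabase.db --root
Datasette will print a one-use sign-in URL to the terminal; opening it authenticates you as the root actor.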
The `configure-fts` permission governs access. You can use permission plugins such as [datasette-permissions-sql](https://github.com/simonw/datasette-permissions-sql) to grant additional access to the write interface.
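As a sketch only - the exact rule format is described in the datasette-permissions-sql README, and the `users` table and `is_staff` column here are assumptions - a `metadata.json` granting `configure-fts` to staff users might look something like this:
{
    ""plugins"": {
        ""datasette-permissions-sql"": [
            {
                ""action"": ""configure-fts"",
                ""sql"": ""SELECT id FROM users WHERE is_staff = 1 AND id = :actor_id""
            }
        ]
    }
}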
","
",,,,,,
305199661,MDEwOlJlcG9zaXRvcnkzMDUxOTk2NjE=,sphinx-to-sqlite,simonw/sphinx-to-sqlite,0,9599,https://github.com/simonw/sphinx-to-sqlite,Create a SQLite database from Sphinx documentation,0,2020-10-18T21:26:55Z,2020-12-19T05:08:12Z,2020-10-22T04:55:45Z,,9,2,2,Python,1,1,1,1,0,0,0,0,2,apache-2.0,"[""sqlite"", ""sphinx"", ""datasette-io"", ""datasette-tool""]",0,2,2,main,"{""admin"": false, ""push"": false, ""pull"": false}",,,0,2,"# sphinx-to-sqlite
[](https://pypi.org/project/sphinx-to-sqlite/)
[](https://github.com/simonw/sphinx-to-sqlite/releases)
[](https://github.com/simonw/sphinx-to-sqlite/actions?query=workflow%3ATest)
[](https://github.com/simonw/sphinx-to-sqlite/blob/master/LICENSE)
Create a SQLite database from Sphinx documentation.
## Demo
You can see the results of running this tool against the [Datasette documentation](https://docs.datasette.io/) at https://latest-docs.datasette.io/docs/sections
## Installation
Install this tool using `pip`:
$ pip install sphinx-to-sqlite
## Usage
First run `sphinx-build` with the `-b xml` option to create XML files in your `_build/` directory.
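For example, if your Sphinx source lives in the current directory (paths here are illustrative):
$ sphinx-build -b xml . _build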
Then run:
$ sphinx-to-sqlite docs.db path/to/_build
This will build the SQLite database.
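Judging by the demo above, the documentation content lands in a `sections` table, which you can inspect with [sqlite-utils](https://sqlite-utils.datasette.io/):
$ sqlite-utils rows docs.db sections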
## Development
To contribute to this tool, first check out the code. Then create a new virtual environment:
cd sphinx-to-sqlite
python -mvenv venv
source venv/bin/activate
Or if you are using `pipenv`:
pipenv shell
Now install the dependencies and test dependencies:
pip install -e '.[test]'
To run the tests:
pytest
","
",,,,,,
331720824,MDEwOlJlcG9zaXRvcnkzMzE3MjA4MjQ=,datasette-leaflet,simonw/datasette-leaflet,0,9599,https://github.com/simonw/datasette-leaflet,Datasette plugin adding the Leaflet JavaScript library,0,2021-01-21T18:41:19Z,2021-04-20T16:27:35Z,2021-02-01T22:20:28Z,,124,3,3,JavaScript,1,1,1,1,0,0,0,0,2,,"[""datasette"", ""datasette-plugin"", ""datasette-io""]",0,2,3,main,"{""admin"": false, ""push"": false, ""pull"": false}",,,0,1,"# datasette-leaflet
[](https://pypi.org/project/datasette-leaflet/)
[](https://github.com/simonw/datasette-leaflet/releases)
[](https://github.com/simonw/datasette-leaflet/actions?query=workflow%3ATest)
[](https://github.com/simonw/datasette-leaflet/blob/main/LICENSE)
Datasette plugin adding the [Leaflet](https://leafletjs.com/) JavaScript library.
A growing number of Datasette plugins depend on the Leaflet JavaScript mapping library. They each have their own way of loading Leaflet, which could result in loading it multiple times (with multiple versions) if more than one plugin is installed.
This library is intended to solve this problem, by providing a single plugin they can all depend on that loads Leaflet in a reusable way.
Plugins that use this:
- [datasette-leaflet-freedraw](https://datasette.io/plugins/datasette-leaflet-freedraw)
- [datasette-leaflet-geojson](https://datasette.io/plugins/datasette-leaflet-geojson)
- [datasette-cluster-map](https://datasette.io/plugins/datasette-cluster-map)
## Installation
You can install this plugin like so:
datasette install datasette-leaflet
Usually this plugin will be a dependency of other plugins, so it should be installed automatically when you install them.
## Usage
The plugin makes `leaflet.js` and `leaflet.css` available as static files. It provides two custom template variables with the URLs of those two files.
- `{{ datasette_leaflet_url }}` is the URL to the JavaScript
- `{{ datasette_leaflet_css_url }}` is the URL to the CSS
These URLs are also made available as global JavaScript constants:
- `datasette.leaflet.JAVASCRIPT_URL`
- `datasette.leaflet.CSS_URL`
The JavaScript is packaged as a [JavaScript module](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Modules). You can dynamically import the JavaScript from a custom template like this:
```html+jinja
<script type=""module"">
import('{{ datasette_leaflet_url }}')
    .then((leaflet) => {
        /* leaflet is the module namespace - use leaflet.map() etc. here */
    });
</script>
```
You can load the CSS like this:
```html+jinja
<link rel=""stylesheet"" href=""{{ datasette_leaflet_css_url }}"">
```
Or dynamically like this:
```html+jinja
<script>
let link = document.createElement('link');
link.rel = 'stylesheet';
link.href = '{{ datasette_leaflet_css_url }}';
document.head.appendChild(link);
</script>
```
Here's a full example that loads the JavaScript and CSS and renders a map:
```html+jinja
<link rel=""stylesheet"" href=""{{ datasette_leaflet_css_url }}"">
<div id=""map"" style=""height: 400px""></div>
<script type=""module"">
import('{{ datasette_leaflet_url }}').then((leaflet) => {
    let div = document.getElementById('map');
    /* Standard Leaflet API: OpenStreetMap tiles centered on an arbitrary point */
    let tiles = leaflet.tileLayer('https://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png', {
        maxZoom: 19,
        attribution: '&copy; OpenStreetMap contributors'
    });
    let map = leaflet.map(div, {
        center: leaflet.latLng(51.505, -0.09),
        zoom: 13
    });
    tiles.addTo(map);
});
</script>
```
","
",,,,,,
346597557,MDEwOlJlcG9zaXRvcnkzNDY1OTc1NTc=,tableau-to-sqlite,simonw/tableau-to-sqlite,0,9599,https://github.com/simonw/tableau-to-sqlite,Fetch data from Tableau into a SQLite database,0,2021-03-11T06:12:02Z,2021-06-10T04:40:44Z,2021-04-29T16:11:03Z,,212,8,8,Python,1,1,1,1,0,2,0,0,2,apache-2.0,"[""datasette-io"", ""datasette-tool""]",2,2,8,main,"{""admin"": false, ""push"": false, ""pull"": false}",,,2,1,"# tableau-to-sqlite
[](https://pypi.org/project/tableau-to-sqlite/)
[](https://github.com/simonw/tableau-to-sqlite/releases)
[](https://github.com/simonw/tableau-to-sqlite/actions?query=workflow%3ATest)
[](https://github.com/simonw/tableau-to-sqlite/blob/master/LICENSE)
Fetch data from Tableau into a SQLite database. A wrapper around [TableauScraper](https://github.com/bertrandmartel/tableau-scraping/).
## Installation
Install this tool using `pip`:
$ pip install tableau-to-sqlite
## Usage
If you have the URL to a Tableau dashboard like this:
https://results.mo.gov/t/COVID19/views/VaccinationsDashboard/Vaccinations
You can pass that directly to the tool:
tableau-to-sqlite tableau.db \
https://results.mo.gov/t/COVID19/views/VaccinationsDashboard/Vaccinations
This will create a SQLite database called `tableau.db` containing one table for each of the worksheets in that dashboard.
If the dashboard is hosted on https://public.tableau.com/ you can instead provide the view name. This will be two strings separated by a `/` symbol - something like this:
OregonCOVID-19VaccineProviderEnrollment/COVID-19VaccineProviderEnrollment
Now run the tool like this:
tableau-to-sqlite tableau.db \
OregonCOVID-19VaccineProviderEnrollment/COVID-19VaccineProviderEnrollment
## Get the data as JSON or CSV
If you're building a [git scraper](https://simonwillison.net/2020/Oct/9/git-scraping/) you may want to convert the data gathered by this tool to CSV or JSON to check into your repository.
You can do that using [sqlite-utils](https://sqlite-utils.datasette.io/). Install it using `pip`:
pip install sqlite-utils
You can dump out a table as JSON like so:
sqlite-utils rows tableau.db \
'Admin Site and County Map Site No Info' > tableau.json
Or as CSV like this:
sqlite-utils rows tableau.db --csv \
'Admin Site and County Map Site No Info' > tableau.csv
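Worksheet table names vary by dashboard. You can list the tables this tool created like so:
sqlite-utils tables tableau.db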
## Development
To contribute to this tool, first check out the code. Then create a new virtual environment:
cd tableau-to-sqlite
python -mvenv venv
source venv/bin/activate
Or if you are using `pipenv`:
pipenv shell
Now install the dependencies and test dependencies:
pip install -e '.[test]'
To run the tests:
pytest
","