id,node_id,name,full_name,private,owner,html_url,description,fork,created_at,updated_at,pushed_at,homepage,size,stargazers_count,watchers_count,language,has_issues,has_projects,has_downloads,has_wiki,has_pages,forks_count,archived,disabled,open_issues_count,license,topics,forks,open_issues,watchers,default_branch,permissions,temp_clone_token,organization,network_count,subscribers_count,readme,readme_html,allow_forking,visibility,is_template,template_repository,web_commit_signoff_required,has_discussions
168474970,MDEwOlJlcG9zaXRvcnkxNjg0NzQ5NzA=,dbf-to-sqlite,simonw/dbf-to-sqlite,0,9599,https://github.com/simonw/dbf-to-sqlite,"CLI tool for converting DBF files (dBase, FoxPro etc) to SQLite",0,2019-01-31T06:30:46Z,2021-03-23T01:29:41Z,2020-02-16T00:41:20Z,,8,25,25,Python,1,1,1,1,0,8,0,0,3,apache-2.0,"[""sqlite"", ""foxpro"", ""dbf"", ""dbase"", ""datasette-io"", ""datasette-tool""]",8,3,25,master,"{""admin"": false, ""push"": false, ""pull"": false}",,,8,2,"# dbf-to-sqlite
[PyPI](https://pypi.python.org/pypi/dbf-to-sqlite)
[Travis CI](https://travis-ci.com/simonw/dbf-to-sqlite)
[License](https://github.com/simonw/dbf-to-sqlite/blob/master/LICENSE)
CLI tool for converting DBF files (dBase, FoxPro etc) to SQLite.
## Installation
pip install dbf-to-sqlite
## Usage
$ dbf-to-sqlite --help
Usage: dbf-to-sqlite [OPTIONS] DBF_PATHS... SQLITE_DB
Convert DBF files (dBase, FoxPro etc) to SQLite
https://github.com/simonw/dbf-to-sqlite
Options:
--version Show the version and exit.
--table TEXT Table name to use (only valid for single files)
-v, --verbose Show what's going on
--help Show this message and exit.
Example usage:
$ dbf-to-sqlite *.DBF database.db
This will create a new SQLite database called `database.db` containing one table for each of the `DBF` files in the current directory.
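As a quick check, here is a minimal Python sketch (assuming the `database.db` created above) that lists the tables this produced:
```python
import sqlite3

# List the tables created from the DBF files
conn = sqlite3.connect(""database.db"")
for (name,) in conn.execute(""select name from sqlite_master where type = 'table'""):
    print(name)
```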
Looking for DBF files to try this out on? Try downloading the [Himalayan Database](http://himalayandatabase.com/) of all expeditions that have climbed in the Nepal Himalaya.
","
dbf-to-sqlite
CLI tool for converting DBF files (dBase, FoxPro etc) to SQLite.
Installation
pip install dbf-to-sqlite
Usage
$ dbf-to-sqlite --help
Usage: dbf-to-sqlite [OPTIONS] DBF_PATHS... SQLITE_DB
Convert DBF files (dBase, FoxPro etc) to SQLite
https://github.com/simonw/dbf-to-sqlite
Options:
--version Show the version and exit.
--table TEXT Table name to use (only valid for single files)
-v, --verbose Show what's going on
--help Show this message and exit.
Example usage:
$ dbf-to-sqlite *.DBF database.db
This will create a new SQLite database called database.db containing one table for each of the DBF files in the current directory.
Looking for DBF files to try this out on? Try downloading the Himalayan Database of all expeditions that have climbed in the Nepal Himalaya.
",,,,,,
189321671,MDEwOlJlcG9zaXRvcnkxODkzMjE2NzE=,datasette-jq,simonw/datasette-jq,0,9599,https://github.com/simonw/datasette-jq,Datasette plugin that adds a custom SQL function for executing jq expressions against JSON values,0,2019-05-30T01:06:31Z,2020-12-24T17:35:27Z,2020-04-09T05:43:43Z,,11,10,10,Python,1,1,1,1,0,0,0,0,0,apache-2.0,"[""jq"", ""datasette"", ""datasette-plugin"", ""datasette-io""]",0,0,10,master,"{""admin"": false, ""push"": false, ""pull"": false}",,,0,2,"# datasette-jq
[PyPI](https://pypi.org/project/datasette-jq/)
[CircleCI](https://circleci.com/gh/simonw/datasette-jq)
[License](https://github.com/simonw/datasette-jq/blob/master/LICENSE)
Datasette plugin that adds custom SQL functions for executing [jq](https://stedolan.github.io/jq/) expressions against JSON values.
Install this plugin in the same environment as Datasette to enable the `jq()` SQL function.
Usage:
select jq(
column_with_json,
""{top_3: .classifiers[:3], v: .version}""
)
See [the jq manual](https://stedolan.github.io/jq/manual/#Basicfilters) for full details of supported expression syntax.
## Interactive demo
You can try this plugin out at [datasette-jq-demo.datasette.io](https://datasette-jq-demo.datasette.io/).
Sample query:
select package, ""https://pypi.org/project/"" || package || ""/"" as url,
jq(info, ""{summary: .info.summary, author: .info.author, versions: .releases|keys|reverse}"")
from packages
[Try this query out](https://datasette-jq-demo.datasette.io/demo?sql=select+package%2C+%22https%3A%2F%2Fpypi.org%2Fproject%2F%22+%7C%7C+package+%7C%7C+%22%2F%22+as+url%2C%0D%0Ajq%28info%2C+%22%7Bsummary%3A+.info.summary%2C+author%3A+.info.author%2C+versions%3A+.releases%7Ckeys%7Creverse%7D%22%29%0D%0Afrom+packages) in the interactive demo.
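You can also run `jq()` queries programmatically against the demo's JSON API. A minimal sketch, using only the Python standard library and the `demo` database and `packages` table shown above:
```python
import json
import urllib.parse
import urllib.request

sql = ""select package, jq(info, '{summary: .info.summary}') as summary from packages limit 3""
url = ""https://datasette-jq-demo.datasette.io/demo.json?"" + urllib.parse.urlencode(
    {""sql"": sql, ""_shape"": ""array""}
)
with urllib.request.urlopen(url) as response:
    print(json.load(response))
```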
","
datasette-jq
Datasette plugin that adds custom SQL functions for executing jq expressions against JSON values.
Install this plugin in the same environment as Datasette to enable the jq() SQL function.
",,,,,,
209091256,MDEwOlJlcG9zaXRvcnkyMDkwOTEyNTY=,datasette-atom,simonw/datasette-atom,0,9599,https://github.com/simonw/datasette-atom,Datasette plugin that adds a .atom output format,0,2019-09-17T15:31:01Z,2021-03-26T02:06:51Z,2021-01-24T23:59:36Z,,47,10,10,Python,1,1,1,1,0,0,0,0,0,apache-2.0,"[""datasette"", ""datasette-plugin"", ""datasette-io""]",0,0,10,main,"{""admin"": false, ""push"": false, ""pull"": false}",,,0,2,"# datasette-atom
[PyPI](https://pypi.org/project/datasette-atom/)
[Changelog](https://github.com/simonw/datasette-atom/releases)
[Tests](https://github.com/simonw/datasette-atom/actions?query=workflow%3ATest)
[License](https://github.com/simonw/datasette-atom/blob/main/LICENSE)
Datasette plugin that adds support for generating [Atom feeds](https://validator.w3.org/feed/docs/atom.html) with the results of a SQL query.
## Installation
Install this plugin in the same environment as Datasette to enable the `.atom` output extension.
$ pip install datasette-atom
## Usage
To create an Atom feed you need to define a custom SQL query that returns a required set of columns:
* `atom_id` - a unique ID for each row. [This article](https://web.archive.org/web/20080211143232/http://diveintomark.org/archives/2004/05/28/howto-atom-id) has suggestions about ways to create these IDs.
* `atom_title` - a title for that row.
* `atom_updated` - an [RFC 3339](http://www.faqs.org/rfcs/rfc3339.html) timestamp representing the last time the entry was modified in a significant way. This can usually be the time that the row was created.
The following columns are optional:
* `atom_content` - content that should be shown in the feed. This will be treated as a regular string, so any embedded HTML tags will be escaped when they are displayed.
* `atom_content_html` - content that should be shown in the feed. This will be treated as an HTML string, and will be sanitized using [Bleach](https://github.com/mozilla/bleach) to ensure it does not have any malicious code in it before being returned as part of a `<content type=""html"">` Atom element. If both are provided, this will be used in place of `atom_content`.
* `atom_link` - a URL that should be used as the link that the feed entry points to.
* `atom_author_name` - the name of the author of the entry. If you provide this you can also provide `atom_author_uri` and `atom_author_email` with a URL and e-mail address for that author.
A query that returns these columns can then be returned as an Atom feed by adding the `.atom` extension.
## Example
Here is an example SQL query which generates an Atom feed for new entries on [www.niche-museums.com](https://www.niche-museums.com/):
```sql
select
  'tag:niche-museums.com,' || substr(created, 0, 11) || ':' || id as atom_id,
  name as atom_title,
  created as atom_updated,
  'https://www.niche-museums.com/browse/museums/' || id as atom_link,
  coalesce(
    '<img src=""' || photo_url || '?w=800&h=400&fit=crop&auto=compress"">',
    ''
  ) || '<p>' || description || '</p>' as atom_content_html
from
  museums
order by
  created desc
limit
  15
```
You can try this query by [pasting it in here](https://www.niche-museums.com/browse) - then click the `.atom` link to see it as an Atom feed.
## Using a canned query
Datasette's [canned query mechanism](https://docs.datasette.io/en/stable/sql_queries.html#canned-queries) is a useful way to configure feeds. If a canned query definition has a `title`, that will be used as the title of the Atom feed.
Here's an example, defined using a `metadata.yaml` file:
```yaml
databases:
  browse:
    queries:
      feed:
        title: Niche Museums
        sql: |-
          select
            'tag:niche-museums.com,' || substr(created, 0, 11) || ':' || id as atom_id,
            name as atom_title,
            created as atom_updated,
            'https://www.niche-museums.com/browse/museums/' || id as atom_link,
            coalesce(
              '<img src=""' || photo_url || '?w=800&h=400&fit=crop&auto=compress"">',
              ''
            ) || '<p>' || description || '</p>' as atom_content_html
          from
            museums
          order by
            created desc
          limit
            15
```
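Once deployed, the feed for that canned query is served at `/browse/feed.atom`. Here is a hedged sketch that consumes it with the third-party [feedparser](https://pypi.org/project/feedparser/) package (not part of this plugin), assuming the configuration above is live on www.niche-museums.com:
```python
import feedparser  # pip install feedparser

# Fetch the .atom rendering of the canned query defined above
feed = feedparser.parse(""https://www.niche-museums.com/browse/feed.atom"")
for entry in feed.entries[:5]:
    print(entry.title, entry.link)
```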
## Disabling HTML filtering
The HTML allow-list used by Bleach for the `atom_content_html` column can be found in the `clean(html)` function at the bottom of [datasette_atom/__init__.py](https://github.com/simonw/datasette-atom/blob/main/datasette_atom/__init__.py).
You can disable Bleach entirely for Atom feeds generated using a canned query. You should only do this if you are certain that no user-provided HTML could be included in that value.
Here's how to do that in `metadata.json`:
```json
{
""plugins"": {
""datasette-atom"": {
""allow_unsafe_html_in_canned_queries"": true
}
}
}
```
Setting this to `true` will disable Bleach filtering for all canned queries across all databases.
You can disable Bleach filtering just for a specific list of canned queries like so:
```json
{
""plugins"": {
""datasette-atom"": {
""allow_unsafe_html_in_canned_queries"": {
""museums"": [""latest"", ""moderation""]
}
}
}
}
```
This will disable Bleach just for the canned queries called `latest` and `moderation` in the `museums.db` database.
","
datasette-atom
Datasette plugin that adds support for generating Atom feeds with the results of a SQL query.
Installation
Install this plugin in the same environment as Datasette to enable the .atom output extension.
$ pip install datasette-atom
Usage
To create an Atom feed you need to define a custom SQL query that returns a required set of columns:
atom_id - a unique ID for each row. This article has suggestions about ways to create these IDs.
atom_title - a title for that row.
atom_updated - an RFC 3339 timestamp representing the last time the entry was modified in a significant way. This can usually be the time that the row was created.
The following columns are optional:
atom_content - content that should be shown in the feed. This will be treated as a regular string, so any embedded HTML tags will be escaped when they are displayed.
atom_content_html - content that should be shown in the feed. This will be treated as an HTML string, and will be sanitized using Bleach to ensure it does not have any malicious code in it before being returned as part of a <content type=""html""> Atom element. If both are provided, this will be used in place of atom_content.
atom_link - a URL that should be used as the link that the feed entry points to.
atom_author_name - the name of the author of the entry. If you provide this you can also provide atom_author_uri and atom_author_email with a URL and e-mail address for that author.
A query that returns these columns can then be returned as an Atom feed by adding the .atom extension.
Example
Here is an example SQL query which generates an Atom feed for new entries on www.niche-museums.com:
select'tag:niche-museums.com,'|| substr(created, 0, 11) ||':'|| id as atom_id,
name as atom_title,
created as atom_updated,
'https://www.niche-museums.com/browse/museums/'|| id as atom_link,
coalesce(
'<img src=""'|| photo_url ||'?w=800&h=400&fit=crop&auto=compress"">',
''
) ||'<p>'|| description ||'</p>'as atom_content_html
from
museums
order by
created desclimit15
You can try this query by pasting it in here - then click the .atom link to see it as an Atom feed.
Using a canned query
Datasette's canned query mechanism is a useful way to configure feeds. If a canned query definition has a title that will be used as the title of the Atom feed.
Here's an example, defined using a metadata.yaml file:
databases:
browse:
queries:
feed:
title: Niche Museumssql: |- select 'tag:niche-museums.com,' || substr(created, 0, 11) || ':' || id as atom_id, name as atom_title, created as atom_updated, 'https://www.niche-museums.com/browse/museums/' || id as atom_link, coalesce( '<img src=""' || photo_url || '?w=800&h=400&fit=crop&auto=compress"">', '' ) || '<p>' || description || '</p>' as atom_content_html from museums order by created desc limit 15
Disabling HTML filtering
The HTML allow-list used by Bleach for the atom_content_html column can be found in the clean(html) function at the bottom of datasette_atom/init.py.
You can disable Bleach entirely for Atom feeds generated using a canned query. You should only do this if you are certain that no user-provided HTML could be included in that value.
This will disable Bleach just for the canned queries called latest and moderation in the museums.db database.
",,,,,,
209590345,MDEwOlJlcG9zaXRvcnkyMDk1OTAzNDU=,genome-to-sqlite,dogsheep/genome-to-sqlite,0,53015001,https://github.com/dogsheep/genome-to-sqlite,Import your genome into a SQLite database,0,2019-09-19T15:38:39Z,2021-01-18T19:39:48Z,2019-09-19T15:41:17Z,,9,13,13,Python,1,1,1,1,0,0,0,0,2,apache-2.0,"[""genetics"", ""sqlite"", ""23andme"", ""personal-analytics"", ""datasette"", ""dogsheep"", ""datasette-io"", ""datasette-tool""]",0,2,13,master,"{""admin"": false, ""push"": false, ""pull"": false}",,53015001,0,2,"# genome-to-sqlite
[PyPI](https://pypi.org/project/genome-to-sqlite/)
[CircleCI](https://circleci.com/gh/dogsheep/genome-to-sqlite)
[License](https://github.com/dogsheep/genome-to-sqlite/blob/master/LICENSE)
Import your genome into a SQLite database.
## How to install
$ pip install genome-to-sqlite
## How to use
First, export your genome. This tool has only been tested against 23andMe so far. You can request an export of your genome from https://you.23andme.com/tools/data/download/
Now you can convert the resulting `export.zip` file to SQLite like so:
$ genome-to-sqlite export.zip genome.db
A progress bar will be displayed. You can disable this using `--silent`.
```
Importing genome [#----------------] 5% 00:01:33
```
You can explore the resulting data using [Datasette](https://datasette.readthedocs.io/) like this:
$ datasette genome.db --config facet_time_limit_ms:1000
Bumping up the facet time limit is useful in order to enable faceting by chromosome:
http://127.0.0.1:8001/genome/genome?_facet=chromosome&_sort=position
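You can also query the table directly. A minimal sketch, assuming the `genome` table and `chromosome` column implied by the URL above:
```python
import sqlite3

# Count rows per chromosome in the imported genome
conn = sqlite3.connect(""genome.db"")
for chromosome, count in conn.execute(
    ""select chromosome, count(*) from genome group by chromosome""
):
    print(chromosome, count)
```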
","
",,,,,,
214299267,MDEwOlJlcG9zaXRvcnkyMTQyOTkyNjc=,datasette-render-timestamps,simonw/datasette-render-timestamps,0,9599,https://github.com/simonw/datasette-render-timestamps,Datasette plugin for rendering timestamps,0,2019-10-10T22:50:50Z,2020-10-17T11:09:42Z,2020-03-22T17:57:17Z,,17,4,4,Python,1,1,1,1,0,1,0,0,0,apache-2.0,"[""datasette"", ""datasette-plugin"", ""datasette-io""]",1,0,4,master,"{""admin"": false, ""push"": false, ""pull"": false}",,,1,2,"# datasette-render-timestamps
[PyPI](https://pypi.org/project/datasette-render-timestamps/)
[CircleCI](https://circleci.com/gh/simonw/datasette-render-timestamps)
[License](https://github.com/simonw/datasette-render-timestamps/blob/master/LICENSE)
Datasette plugin for rendering timestamps.
## Installation
Install this plugin in the same environment as Datasette to enable this new functionality:
pip install datasette-render-timestamps
The plugin will then look out for integer numbers that are likely to be timestamps - anything that would be a number of seconds from 5 years ago to 5 years in the future.
These will then be rendered in a more readable format.
## Configuration
You can disable automatic column detection in favour of explicitly listing the columns that you would like to render using [plugin configuration](https://datasette.readthedocs.io/en/stable/plugins.html#plugin-configuration) in a `metadata.json` file.
Add a `""datasette-render-timestamps""` configuration block and use a `""columns""` key to list the columns you would like to treat as timestamp values:
```json
{
""plugins"": {
""datasette-render-timestamps"": {
""columns"": [""created"", ""updated""]
}
}
}
```
This will cause any `created` or `updated` columns in any table to be treated as timestamps and rendered.
Save this to `metadata.json` and run datasette with the `--metadata` flag to load this configuration:
datasette serve mydata.db --metadata metadata.json
To disable automatic timestamp detection entirely, you can use `""columns"": []`.
This configuration block can be used at the top level, or it can be applied just to specific databases or tables. Here's how to apply it to just the `entries` table in the `news.db` database:
```json
{
""databases"": {
""news"": {
""tables"": {
""entries"": {
""plugins"": {
""datasette-render-timestamps"": {
""columns"": [""created"", ""updated""]
}
}
}
}
}
}
}
```
And here's how to apply it to every `created` or `updated` column in every table in the `news.db` database:
```json
{
""databases"": {
""news"": {
""plugins"": {
""datasette-render-timestamps"": {
""columns"": [""created"", ""updated""]
}
}
}
}
}
```
### Customizing the date format
The default format is `%B %d, %Y - %H:%M:%S UTC` which renders for example: `October 10, 2019 - 07:18:29 UTC`. If you want another format, the date format can be customized using plugin configuration. Any format string supported by [strftime](http://strftime.org/) may be used. For example:
```json
{
""plugins"": {
""datasette-render-timestamps"": {
""format"": ""%Y-%m-%d-%H:%M:%S""
}
}
}
```
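To preview what a format string will produce, you can apply the same pattern with Python's standard `strftime` (this sketch assumes rendering in UTC, as the default format suggests):
```python
from datetime import datetime, timezone

ts = 1570691909  # integer seconds since the Unix epoch
# The plugin's default format:
print(datetime.fromtimestamp(ts, timezone.utc).strftime(""%B %d, %Y - %H:%M:%S UTC""))
# October 10, 2019 - 07:18:29 UTC
```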
","
datasette-render-timestamps
Datasette plugin for rendering timestamps.
Installation
Install this plugin in the same environment as Datasette to enable this new functionality:
pip install datasette-render-timestamps
The plugin will then look out for integer numbers that are likely to be timestamps - anything that would be a number of seconds from 5 years ago to 5 years in the future.
These will then be rendered in a more readable format.
Configuration
You can disable automatic column detection in favour of explicitly listing the columns that you would like to render using plugin configuration in a metadata.json file.
Add a ""datasette-render-timestamps"" configuration block and use a ""columns"" key to list the columns you would like to treat as timestamp values:
To disable automatic timestamp detection entirely, you can use ""columnns"": [].
This configuration block can be used at the top level, or it can be applied just to specific databases or tables. Here's how to apply it to just the entries table in the news.db database:
The default format is %B %d, %Y - %H:%M:%S UTC which renders for example: October 10, 2019 - 07:18:29 UTC. If you want another format, the date format can be customized using plugin configuration. Any format string supported by strftime may be used. For example:
",,,,,,
245670670,MDEwOlJlcG9zaXRvcnkyNDU2NzA2NzA=,fec-to-sqlite,simonw/fec-to-sqlite,0,9599,https://github.com/simonw/fec-to-sqlite,Save FEC campaign finance data to a SQLite database,0,2020-03-07T16:52:49Z,2020-12-19T05:09:05Z,2020-03-07T18:21:48Z,,16,8,8,Python,1,1,1,1,0,0,0,0,1,apache-2.0,"[""sqlite"", ""fec"", ""datasette"", ""datasette-io"", ""datasette-tool""]",0,1,8,master,"{""admin"": false, ""push"": false, ""pull"": false}",,,0,2,"# fec-to-sqlite
[PyPI](https://pypi.org/project/fec-to-sqlite/)
[CircleCI](https://circleci.com/gh/simonw/fec-to-sqlite)
[License](https://github.com/simonw/fec-to-sqlite/blob/master/LICENSE)
Create a SQLite database using FEC campaign contributions data.
This tool builds on [fecfile](https://github.com/esonderegger/fecfile) by Evan Sonderegger.
## How to install
$ pip install fec-to-sqlite
## Usage
$ fec-to-sqlite filings filings.db 1146148
This fetches the filing with ID `1146148` and stores it in tables in a SQLite database called `filings.db`. It will create any tables it needs.
You can pass more than one filing ID, separated by spaces.
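Since the tables depend on the forms in each filing, a quick way to see what was created is a standard-library sketch like this:
```python
import sqlite3

# List each table fec-to-sqlite created, with its row count
conn = sqlite3.connect(""filings.db"")
for (name,) in conn.execute(""select name from sqlite_master where type = 'table'""):
    count = conn.execute(f'select count(*) from ""{name}""').fetchone()[0]
    print(name, count)
```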
","
fec-to-sqlite
Create a SQLite database using FEC campaign contributions data.
This fetches the filing with ID 1146148 and stores it in tables in a SQLite database called filings.db. It will create any tables it needs.
You can pass more than one filing ID, separated by spaces.
",,,,,,
274264484,MDEwOlJlcG9zaXRvcnkyNzQyNjQ0ODQ=,sqlite-generate,simonw/sqlite-generate,0,9599,https://github.com/simonw/sqlite-generate,Tool for generating demo SQLite databases,0,2020-06-22T23:36:44Z,2021-02-27T15:25:26Z,2021-02-27T15:25:24Z,https://sqlite-generate-demo.datasette.io/,56,17,17,Python,1,1,1,1,0,0,0,0,0,apache-2.0,"[""sqlite"", ""datasette-io"", ""datasette-tool""]",0,0,17,main,"{""admin"": false, ""push"": false, ""pull"": false}",,,0,2,"# sqlite-generate
[PyPI](https://pypi.org/project/sqlite-generate/)
[Changelog](https://github.com/simonw/sqlite-generate/releases)
[License](https://github.com/simonw/sqlite-generate/blob/master/LICENSE)
Tool for generating demo SQLite databases
## Installation
Install this plugin using `pip`:
$ pip install sqlite-generate
## Demo
You can see a demo of the database generated using this command running in [Datasette](https://github.com/simonw/datasette) at https://sqlite-generate-demo.datasette.io/
The demo is generated using the following command:
sqlite-generate demo.db --seed seed --fts --columns=10 --fks=0,3 --pks=0,2
## Usage
To generate a SQLite database file called `data.db` with 10 randomly named tables in it, run the following:
sqlite-generate data.db
You can use the `--tables` option to generate a different number of tables:
sqlite-generate data.db --tables 20
You can run the command against the same database file multiple times to keep adding new tables, using different settings for each batch of generated tables.
By default each table will contain a random number of rows between 0 and 200. You can customize this with the `--rows` option:
sqlite-generate data.db --rows 20
This will insert 20 rows into each table.
sqlite-generate data.db --rows 500,2000
This inserts a random number of rows between 500 and 2000 into each table.
Each table will have 5 columns. You can change this using `--columns`:
sqlite-generate data.db --columns 10
`--columns` can also accept a range:
sqlite-generate data.db --columns 5,15
You can control the random number seed used with the `--seed` option. This will result in the exact same database file being created by multiple runs of the tool:
sqlite-generate data.db --seed=myseed
By default each table will contain between 0 and 2 foreign key columns to other tables. You can control this using the `--fks` option, with either a single number or a range:
sqlite-generate data.db --columns=20 --fks=5,15
Each table will have a single primary key column called `id`. You can use the `--pks=` option to change the number of primary key columns on each table. Drop it to 0 to generate [rowid tables](https://www.sqlite.org/rowidtable.html). Increase it above 1 to generate tables with compound primary keys. Or use a range to get a random selection of different primary key layouts:
sqlite-generate data.db --pks=0,2
To configure [SQLite full-text search](https://www.sqlite.org/fts5.html) for all columns of type text, use `--fts`:
sqlite-generate data.db --fts
This will use FTS5 by default. To use [FTS4](https://www.sqlite.org/fts3.html) instead, use `--fts4`.
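One way to confirm the full-text search configuration is a minimal sketch against the generated `data.db`:
```python
import sqlite3

# List the FTS virtual tables that sqlite-generate configured
conn = sqlite3.connect(""data.db"")
rows = conn.execute(
    ""select name from sqlite_master where sql like '%virtual table%using fts%'""
).fetchall()
print([name for (name,) in rows])
```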
## Development
To contribute to this tool, first check out the code. Then create a new virtual environment:
cd sqlite-generate
python -m venv venv
source venv/bin/activate
Or if you are using `pipenv`:
pipenv shell
Now install the dependencies and test dependencies:
pip install -e '.[test]'
To run the tests:
pytest
","
To generate a SQLite database file called data.db with 10 randomly named tables in it, run the following:
sqlite-generate data.db
You can use the --tables option to generate a different number of tables:
sqlite-generate data.db --tables 20
You can run the command against the same database file multiple times to keep adding new tables, using different settings for each batch of generated tables.
By default each table will contain a random number of rows between 0 and 200. You can customize this with the --rows option:
sqlite-generate data.db --rows 20
This will insert 20 rows into each table.
sqlite-generate data.db --rows 500,2000
This inserts a random number of rows between 500 and 2000 into each table.
Each table will have 5 columns. You can change this using --columns:
sqlite-generate data.db --columns 10
--columns can also accept a range:
sqlite-generate data.db --columns 5,15
You can control the random number seed used with the --seed option. This will result in the exact same database file being created by multiple runs of the tool:
sqlite-generate data.db --seed=myseed
By default each table will contain between 0 and 2 foreign key columns to other tables. You can control this using the --fks option, with either a single number or a range:
sqlite-generate data.db --columns=20 --fks=5,15
Each table will have a single primary key column called id. You can use the --pks= option to change the number of primary key columns on each table. Drop it to 0 to generate rowid tables. Increase it above 1 to generate tables with compound primary keys. Or use a range to get a random selection of different primary key layouts:
This will use FTS5 by default. To use FTS4 instead, use --fts4.
Development
To contribute to this tool, first checkout the code. Then create a new virtual environment:
cd sqlite-generate
python -mvenv venv
source venv/bin/activate
Or if you are using pipenv:
pipenv shell
Now install the dependencies and tests:
pip install -e '.[test]'
To run the tests:
pytest
",,,,,,
305199661,MDEwOlJlcG9zaXRvcnkzMDUxOTk2NjE=,sphinx-to-sqlite,simonw/sphinx-to-sqlite,0,9599,https://github.com/simonw/sphinx-to-sqlite,Create a SQLite database from Sphinx documentation,0,2020-10-18T21:26:55Z,2020-12-19T05:08:12Z,2020-10-22T04:55:45Z,,9,2,2,Python,1,1,1,1,0,0,0,0,2,apache-2.0,"[""sqlite"", ""sphinx"", ""datasette-io"", ""datasette-tool""]",0,2,2,main,"{""admin"": false, ""push"": false, ""pull"": false}",,,0,2,"# sphinx-to-sqlite
[PyPI](https://pypi.org/project/sphinx-to-sqlite/)
[Changelog](https://github.com/simonw/sphinx-to-sqlite/releases)
[Tests](https://github.com/simonw/sphinx-to-sqlite/actions?query=workflow%3ATest)
[License](https://github.com/simonw/sphinx-to-sqlite/blob/master/LICENSE)
Create a SQLite database from Sphinx documentation.
## Demo
You can see the results of running this tool against the [Datasette documentation](https://docs.datasette.io/) at https://latest-docs.datasette.io/docs/sections
## Installation
Install this tool using `pip`:
$ pip install sphinx-to-sqlite
## Usage
First run `sphinx-build` with the `-b xml` option to create XML files in your `_build/` directory.
Then run:
$ sphinx-to-sqlite docs.db path/to/_build
To build the SQLite database.
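A minimal sketch for inspecting the result, assuming the `sections` table shown in the demo above:
```python
import sqlite3

# Count the documentation sections that were imported
conn = sqlite3.connect(""docs.db"")
print(conn.execute(""select count(*) from sections"").fetchone()[0])
```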
## Development
To contribute to this tool, first check out the code. Then create a new virtual environment:
cd sphinx-to-sqlite
python -m venv venv
source venv/bin/activate
Or if you are using `pipenv`:
pipenv shell
Now install the dependencies and test dependencies:
pip install -e '.[test]'
To run the tests:
pytest
","
sphinx-to-sqlite
Create a SQLite database from Sphinx documentation.
First run sphinx-build with the -b xml option to create XML files in your _build/ directory.
Then run:
$ sphinx-to-sqlite docs.db path/to/_build
To build the SQLite database.
Development
To contribute to this tool, first checkout the code. Then create a new virtual environment:
cd sphinx-to-sqlite
python -mvenv venv
source venv/bin/activate
Or if you are using pipenv:
pipenv shell
Now install the dependencies and tests:
pip install -e '.[test]'
To run the tests:
pytest
",,,,,,
327087207,MDEwOlJlcG9zaXRvcnkzMjcwODcyMDc=,datasette-css-properties,simonw/datasette-css-properties,0,9599,https://github.com/simonw/datasette-css-properties,Experimental Datasette output plugin using CSS properties,0,2021-01-05T18:38:07Z,2021-01-12T17:43:11Z,2021-01-07T22:07:19Z,,10,12,12,Python,1,1,1,1,0,0,0,0,1,,"[""datasette-plugin"", ""datasette-io""]",0,1,12,main,"{""admin"": false, ""push"": false, ""pull"": false}",,,0,2,"# datasette-css-properties
[PyPI](https://pypi.org/project/datasette-css-properties/)
[Changelog](https://github.com/simonw/datasette-css-properties/releases)
[Tests](https://github.com/simonw/datasette-css-properties/actions?query=workflow%3ATest)
[License](https://github.com/simonw/datasette-css-properties/blob/main/LICENSE)
Extremely experimental Datasette output plugin using CSS properties, inspired by [Custom Properties as State](https://css-tricks.com/custom-properties-as-state/) by Chris Coyier.
More about this project: [APIs from CSS without JavaScript: the datasette-css-properties plugin](https://simonwillison.net/2021/Jan/7/css-apis-no-javascript/)
## Installation
Install this plugin in the same environment as Datasette.
$ datasette install datasette-css-properties
## Usage
Once installed, this plugin adds a `.css` output format to every query result. This will return the first row in the query as a valid CSS file, defining each column as a custom property:
Example: https://latest-with-plugins.datasette.io/fixtures/roadside_attractions.css produces:
```css
:root {
--pk: '1';
--name: 'The Mystery Spot';
--address: '465 Mystery Spot Road, Santa Cruz, CA 95065';
--latitude: '37.0167';
--longitude: '-122.0024';
}
```
If you link this stylesheet to your page you can then do things like this:
```html
<link rel=""stylesheet"" href=""https://latest-with-plugins.datasette.io/fixtures/roadside_attractions.css"">
<p>Attraction name: <span class=""attraction-name""></span></p>
<style>
.attraction-name::after { content: var(--name); }
</style>
```
Values will be quoted as CSS strings by default. If you want to return a ""raw"" value without the quotes - for example to set a CSS property that is numeric or a color, you can specify that column name using the `?_raw=column-name` parameter. This can be passed multiple times.
Consider [this example query](https://latest-with-plugins.datasette.io/github?sql=select%0D%0A++%27%23%27+||+substr(sha%2C+0%2C+6)+as+[custom-bg]%0D%0Afrom%0D%0A++commits%0D%0Aorder+by%0D%0A++author_date+desc%0D%0Alimit%0D%0A++1%3B):
```sql
select
'#' || substr(sha, 0, 6) as [custom-bg]
from
commits
order by
author_date desc
limit
1;
```
This returns the first 6 characters of the most recently authored commit with a `#` prefix. The `.css` [output rendered version](https://latest-with-plugins.datasette.io/github.css?sql=select%0D%0A++%27%23%27+||+substr(sha%2C+0%2C+6)+as+[custom-bg]%0D%0Afrom%0D%0A++commits%0D%0Aorder+by%0D%0A++author_date+desc%0D%0Alimit%0D%0A++1%3B) looks like this:
```css
:root {
--custom-bg: '#97fb1';
}
```
Adding `?_raw=custom-bg` to the URL produces [this instead](https://latest-with-plugins.datasette.io/github.css?sql=select%0D%0A++%27%23%27+||+substr(sha%2C+0%2C+6)+as+[custom-bg]%0D%0Afrom%0D%0A++commits%0D%0Aorder+by%0D%0A++author_date+desc%0D%0Alimit%0D%0A++1%3B&_raw=custom-bg):
```css
:root {
--custom-bg: #97fb1;
}
```
This can then be used as a color value like so:
```css
h1 {
background-color: var(--custom-bg);
}
```
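To see the raw CSS that Datasette returns, a quick standard-library sketch fetching the example above:
```python
import urllib.request

# Fetch the .css rendering of the documented example table
url = ""https://latest-with-plugins.datasette.io/fixtures/roadside_attractions.css""
print(urllib.request.urlopen(url).read().decode(""utf-8""))
```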
## Development
To set up this plugin locally, first check out the code. Then create a new virtual environment:
cd datasette-css-properties
python3 -m venv venv
source venv/bin/activate
Or if you are using `pipenv`:
pipenv shell
Now install the dependencies and test dependencies:
pip install -e '.[test]'
To run the tests:
pytest
","
datasette-css-properties
Extremely experimental Datasette output plugin using CSS properties, inspired by Custom Properties as State by Chris Coyier.
Install this plugin in the same environment as Datasette.
$ datasette install datasette-css-properties
Usage
Once installed, this plugin adds a .css output format to every query result. This will return the first row in the query as a valid CSS file, defining each column as a custom property:
Values will be quoted as CSS strings by default. If you want to return a ""raw"" value without the quotes - for example to set a CSS property that is numeric or a color, you can specify that column name using the ?_raw=column-name parameter. This can be passed multiple times.