datasette-pretty-json
Datasette plugin that pretty-prints any column values that are valid JSON objects or arrays.
You may also be interested in datasette-json-html.
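As a rough illustration of the idea (not the plugin's actual code), the transformation amounts to: try to parse each value, and only re-render it with indentation if it turns out to be a JSON object or array:
import json
def maybe_pretty_print(value):
    # Illustrative helper: values that parse as JSON objects or arrays get
    # re-rendered with indentation; everything else is left alone.
    try:
        parsed = json.loads(value)
    except (TypeError, ValueError):
        return value
    if isinstance(parsed, (dict, list)):
        return json.dumps(parsed, indent=4)
    return value
print(maybe_pretty_print('{"tags": ["datasette", "json"]}'))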
This Datasette plugin lets you configure Datasette to render specific columns as HTML in the table and row interfaces.
This means you can store HTML in those columns and have it rendered as such on those pages.
If you have a database called docs.db containing a glossary table and you want the definition column in that table to be rendered as HTML, you would use a metadata.json file that looks like this:
{
    "databases": {
        "docs": {
            "tables": {
                "glossary": {
                    "plugins": {
                        "datasette-render-html": {
                            "columns": ["definition"]
                        }
                    }
                }
            }
        }
    }
}
This plugin allows HTML to be rendered exactly as it is stored in the database. As such, you should be sure only to use this against columns with content that you trust - otherwise you could open yourself up to an XSS attack.
It's possible to configure this plugin to apply to columns with specific names across whole databases or the full Datasette instance, but doing so is not safe. It could open you up to XSS vulnerabilities where an attacker composes a SQL query that results in a column containing unsafe HTML.
As such, you should only use this plugin against specific columns in specific tables, as shown in the example above.
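For completeness, here is one hedged way to create the docs.db database used in the example above with Python's sqlite3 module; the table layout and the HTML content are purely illustrative:
import sqlite3
conn = sqlite3.connect("docs.db")
conn.execute("create table if not exists glossary (term text, definition text)")
conn.execute(
    "insert into glossary (term, definition) values (?, ?)",
    ("CSV", "<p>A <strong>comma-separated values</strong> file.</p>"),
)
conn.commit()
conn.close()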
Datasette plugin for working with Apple's binary plist format.
This plugin adds two features: a display hook and a SQL function.
The display hook will detect any database values that are encoded using the binary plist format. It will decode them, convert them into JSON and display them pretty-printed in the Datasette UI.
The SQL function bplist_to_json(value) can be used inside a SQL query to convert a binary plist value into a JSON string. This can then be used with SQLite's json_extract() function or with the datasette-jq plugin to further analyze that data as part of a SQL query.
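Conceptually, bplist_to_json(value) behaves like this sketch built on Python's standard plistlib and json modules (the plugin's real implementation may differ in detail):
import json
import plistlib
def bplist_to_json(value):
    # Decode the binary plist, then serialize the result as JSON;
    # default=str covers values such as datetimes that JSON cannot represent directly.
    return json.dumps(plistlib.loads(value), default=str)
# Round-trip a small hypothetical plist to show the shape of the output.
example = plistlib.dumps(
    {"Exif": {"LensModel": "iPhone 11 back camera"}}, fmt=plistlib.FMT_BINARY
)
print(bplist_to_json(example))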
Install this plugin in the same environment as Datasette to enable this new functionality:
pip install datasette-bplist
If you use a Mac you already have plenty of SQLite databases that contain binary plist data.
One example is the database that powers the Apple Photos app.
This database tends to be locked, so you will need to create a copy of the database in order to run queries against it:
cp ~/Pictures/Photos\ Library.photoslibrary/database/photos.db /tmp/photos.db
The database also makes use of custom SQLite extensions which prevent it from opening in Datasette.
You can work around this by exporting the data that you want to experiment with into a new SQLite file.
I recommend trying this plugin against the RKMaster_dataNote table, which contains plist-encoded EXIF metadata about the photos you have taken.
You can export that table into a fresh database like so:
sqlite3 /tmp/photos.db ".dump RKMaster_dataNote" | sqlite3 /tmp/exif.db
Now run datasette /tmp/exif.db and you can start trying out the plugin.
Once you have the exif.db demo working, you can try the bplist_to_json() SQL function.
Here's a query that shows the camera lenses you have used the most often to take photos:
select
json_extract(
bplist_to_json(value),
""$.{Exif}.LensModel""
) as lens,
count(*) as n
from RKMaster_dataNote
group by lens
order by n desc;
If you have a large number of photos this query can take a long time to execute, so you may need to increase the SQL time limit enforced by Datasette like so:
$ datasette /tmp/exif.db \
--config sql_time_limit_ms:10000
Here's another query, showing the time at which you took every photo in your library which is classified as a screenshot:
select
attachedToId,
json_extract(
bplist_to_json(value),
""$.{Exif}.DateTimeOriginal""
)
from RKMaster_dataNote
where
json_extract(
bplist_to_json(value),
""$.{Exif}.UserComment""
) = ""Screenshot""
And if you install the datasette-cluster-map plugin, this query will show you a map of your most recent 1000 photos:
select
*,
json_extract(
bplist_to_json(value),
""$.{GPS}.Latitude""
) as latitude,
-json_extract(
bplist_to_json(value),
""$.{GPS}.Longitude""
) as longitude,
json_extract(
bplist_to_json(value),
""$.{Exif}.DateTimeOriginal""
) as datetime
from
RKMaster_dataNote
where
latitude is not null
order by
attachedToId desc
Datasette plugin for rendering binary data.
Install this plugin in the same environment as Datasette to enable this new functionality:
pip install datasette-render-binary
Binary data in cells will now be rendered as a mixture of characters and octets.
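As a rough sketch of the idea (not the plugin's exact output format): printable ASCII bytes are shown as characters and everything else as hex octets.
def render_binary(value):
    # Show printable ASCII bytes as characters, everything else as hex octets.
    return " ".join(
        chr(b) if 0x20 <= b < 0x7F else "0x{:02x}".format(b)
        for b in value
    )
print(render_binary(b"\x89PNG\r\n\x1a\n"))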
Datasette plugin for configuring CORS headers, based on https://github.com/simonw/asgi-cors
You can use this plugin to allow JavaScript running on a whitelisted set of domains to make fetch() calls to the JSON API provided by your Datasette instance.
pip install datasette-cors
You need to add some configuration to your Datasette metadata.json file for this plugin to take effect.
To whitelist specific domains, use this:
{
    "plugins": {
        "datasette-cors": {
            "hosts": ["https://www.example.com"]
        }
    }
}
You can also whitelist patterns like this:
{
    "plugins": {
        "datasette-cors": {
            "host_wildcards": ["https://*.example.com"]
        }
    }
}
To test this plugin out, run it locally by saving one of the above examples as metadata.json and running this:
$ datasette --memory -m metadata.json
Now visit https://www.example.com/ in your browser, open the browser developer console and paste in the following:
fetch(""http://127.0.0.1:8001/:memory:.json?sql=select+sqlite_version%28%29"").then(r => r.json()).then(console.log)
If the plugin is running correctly, you will see the JSON response output to the console.
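If you would rather verify the behaviour from Python instead of a browser, this hedged check sends an Origin header matching the whitelisted domain and prints the Access-Control-Allow-Origin header the plugin should add to the response:
import requests
# Assumes Datasette is running locally on port 8001 with the "hosts" example above.
response = requests.get(
    "http://127.0.0.1:8001/:memory:.json?sql=select+sqlite_version()",
    headers={"Origin": "https://www.example.com"},
)
print(response.headers.get("access-control-allow-origin"))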
Create a SQLite database containing your checkin history from Foursquare Swarm.
$ pip install swarm-to-sqlite
You will need to first obtain a valid OAuth token for your Foursquare account. You can do so using this tool: https://your-foursquare-oauth-token.glitch.me/
The simplest usage is to provide the name of the database file you wish to write to. The tool will prompt you to paste in your token, and will then download your checkins and store them in the specified database file.
$ swarm-to-sqlite checkins.db
Please provide your Foursquare OAuth token:
Importing 3699 checkins [#########-----------------------] 27% 00:02:31
You can also pass the token as a command-line option:
$ swarm-to-sqlite checkins.db --token=XXX
Or as an environment variable:
$ export FOURSQUARE_TOKEN=XXX
$ swarm-to-sqlite checkins.db
To retrieve just checkins within the past X hours, days or weeks, use the --since= option. For example, to pull only checkins that happened within the last 10 days use:
$ swarm-to-sqlite checkins.db --token=XXX --since=10d
Use 2w for two weeks, 10h for ten hours, 3d for three days.
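To illustrate what those values mean (this mirrors the documented behaviour rather than swarm-to-sqlite's own parsing code), each one maps onto a time window like this:
from datetime import timedelta
UNITS = {"h": "hours", "d": "days", "w": "weeks"}
def since_to_timedelta(value):
    # "10d" -> timedelta(days=10), "2w" -> timedelta(weeks=2), "10h" -> timedelta(hours=10)
    amount, unit = int(value[:-1]), value[-1]
    return timedelta(**{UNITS[unit]: amount})
print(since_to_timedelta("10d"), since_to_timedelta("2w"), since_to_timedelta("10h"))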
In addition to saving the checkins to a database, you can also write them to a JSON file using the --save option:
$ swarm-to-sqlite checkins.db --save=checkins.json
Having done this, you can re-import checkins directly from that file (rather than making API calls to fetch data from Foursquare) like this:
$ swarm-to-sqlite checkins.db --load=checkins.json
The SQLite database produced by this tool is designed to be browsed using Datasette.
You can install the datasette-cluster-map plugin to view your checkins on a map.
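Before browsing it with Datasette you can also inspect what the tool created directly with Python's sqlite3 module - the exact set of tables depends on your data, so list them rather than assuming names:
import sqlite3
conn = sqlite3.connect("checkins.db")
for (name,) in conn.execute(
    "select name from sqlite_master where type = 'table' order by name"
):
    print(name)
conn.close()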
Datasette plugin for rendering Markdown.
Install this plugin in the same environment as Datasette to enable this new functionality:
$ pip install datasette-render-markdown
You can explicitly list the columns you would like to treat as Markdown using plugin configuration in a metadata.json file.
Add a ""datasette-render-markdown"" configuration block and use a ""columns"" key to list the columns you would like to treat as Markdown values:
{
""plugins"": {
""datasette-render-markdown"": {
""columns"": [""body""]
}
}
}This will cause any body column in any table to be treated as markdown and safely rendered using Python-Markdown. The resulting HTML is then run through Bleach to avoid the risk of XSS security problems.
Save this to metadata.json and run Datasette with the --metadata flag to load this configuration:
$ datasette serve mydata.db --metadata metadata.json
The configuration block can be used at the top level, or it can be applied just to specific databases or tables. Here's how to apply it to just the entries table in the news.db database:
{
    "databases": {
        "news": {
            "tables": {
                "entries": {
                    "plugins": {
                        "datasette-render-markdown": {
                            "columns": ["body"]
                        }
                    }
                }
            }
        }
    }
}
And here's how to apply it to every body column in every table in the news.db database:
{
    "databases": {
        "news": {
            "plugins": {
                "datasette-render-markdown": {
                    "columns": ["body"]
                }
            }
        }
    }
}
This plugin can also render Markdown in any columns that match a specific naming convention.
By default, columns that have a name ending in _markdown will be rendered.
You can try this out using the following query:
select '# Hello there

* This is a list
* of items

[And a link](https://github.com/simonw/datasette-render-markdown).'
as demo_markdown
You can configure a different list of wildcard patterns using the "patterns" configuration key. Here's how to render columns that end in either _markdown or _md:
{
    "plugins": {
        "datasette-render-markdown": {
            "patterns": ["*_markdown", "*_md"]
        }
    }
}
To disable wildcard column matching entirely, set "patterns": [] in your plugin metadata configuration.
The Python-Markdown library that powers this plugin supports extensions, both bundled and third-party. These can be used to enable additional Markdown features such as table support.
You can configure support for extensions using the "extensions" key in your plugin metadata configuration.
Since extensions may introduce new HTML tags, you will also need to add those tags to the list of tags that are allowed by the Bleach sanitizer. You can do that using the "extra_tags" key, and you can whitelist additional HTML attributes using "extra_attrs". See the Bleach documentation for more information on this.
Here's how to enable support for Markdown tables:
{
    "plugins": {
        "datasette-render-markdown": {
            "extensions": ["tables"],
            "extra_tags": ["table", "thead", "tr", "th", "td", "tbody"]
        }
    }
}
Enabling GitHub-Flavored Markdown (useful if you are working with data imported from GitHub using github-to-sqlite) is a little more complicated.
First, you will need to install the py-gfm package:
$ pip install py-gfm
Note that py-gfm has a bug that causes it to pin to Markdown<3.0 - so if you are using it you should install it before installing datasette-render-markdown to ensure you get a compatible version of that dependency.
Now you can configure it like this. Note that the extension name is mdx_gfm:GithubFlavoredMarkdownExtension and you need to whitelist several extra HTML tags and attributes:
{
    "plugins": {
        "datasette-render-markdown": {
            "extra_tags": [
                "hr",
                "br",
                "details",
                "summary",
                "input"
            ],
            "extra_attrs": {
                "input": [
                    "type",
                    "disabled",
                    "checked"
                ]
            },
            "extensions": [
                "mdx_gfm:GithubFlavoredMarkdownExtension"
            ]
        }
    }
}
The input tag, with its type, checked and disabled attributes, is needed to support rendering checkboxes in issue descriptions.
The plugin also adds a new template function: render_markdown(value). You can use this in your templates like so:
{{ render_markdown(""""""
# This is markdown
* One
* Two
* Three
"""""") }}You can load additional extensions and whitelist tags by passing extra arguments to the function like this:
{{ render_markdown(""""""
## Markdown table
First Header | Second Header
------------- | -------------
Content Cell | Content Cell
Content Cell | Content Cell
"""""", extensions=[""tables""],
extra_tags=[""table"", ""thead"", ""tr"", ""th"", ""td"", ""tbody""])) }}{{ article.date }}
{{ article.summary }}
{% endfor %} ``` ","Datasette plugin for executing SQL queries from templates.
datasette.io uses this plugin extensively with custom page templates, check out simonw/datasette.io to see how it works.
www.niche-museums.com uses this plugin to run a custom themed website on top of Datasette. The full source code for the site is here - see also niche-museums.com, powered by Datasette.
simonw/til is another simple example, described in Using a self-rewriting README powered by GitHub Actions to track TILs.
Run this command to install the plugin in the same environment as Datasette:
$ pip install datasette-template-sql
This plugin makes a new function, sql(sql_query), available to your Datasette templates.
You can use it like this:
{% for row in sql(""select 1 + 1 as two, 2 * 4 as eight"") %} {% for key in row.keys() %} {{ key }}: {{ row[key] }}<br> {% endfor %} {% endfor %}
The plugin will execute SQL against the current database for the page in database.html, table.html and row.html templates. If a template does not have a current database (index.html for example) the query will execute against the first attached database.
You can construct a SQL query using ? or :name parameter syntax by passing a list or dictionary as a second argument:
{% for row in sql(""select distinct topic from til order by topic"") %} <h2>{{ row.topic }}</h2> <ul> {% for til in sql(""select * from til where topic = ?"", [row.topic]) %} <li><a href=""{{ til.url }}"">{{ til.title }}</a> - {{ til.created[:10] }}</li> {% endfor %} </ul> {% endfor %}
Here's the same example using the :topic style of parameters:
{% for row in sql(""select distinct topic from til order by topic"") %} <h2>{{ row.topic }}</h2> <ul> {% for til in sql(""select * from til where topic = :topic"", {""topic"": row.topic}) %} <li><a href=""{{ til.url }}"">{{ til.title }}</a> - {{ til.created[:10] }}</li> {% endfor %} </ul> {% endfor %}
You can pass an optional database= argument to specify a named database to use for the query. For example, if you have attached a news.db database you could use this:
{% for article in sql(
    "select headline, date, summary from articles order by date desc limit 5",
    database="news"
) %}
    <h3>{{ article.headline }}</h3>
    <p class="date">{{ article.date }}</p>
    <p>{{ article.summary }}</p>
{% endfor %}
Create a SQLite database using FEC campaign contributions data.
This tool builds on fecfile by Evan Sonderegger.
$ pip install fec-to-sqlite
$ fec-to-sqlite filings filings.db 1146148
This fetches the filing with ID 1146148 and stores it in tables in a SQLite database called filings.db. It will create any tables it needs.
You can pass more than one filing ID, separated by spaces.
Datasette plugin for displaying error tracebacks.
This plugin does not work with current versions of Datasette. See issue #2.
pip install datasette-show-errors
Installing the plugin will cause any internal error to be displayed with a full traceback, rather than just a generic 500 page.
Be careful not to use this in a context that might expose sensitive information.
Datasette plugin adding a /-/psutil debugging endpoint
Install this plugin in the same environment as Datasette.
$ pip install datasette-psutil
Visit /-/psutil on your Datasette instance to see various information provided by psutil.
https://latest-with-plugins.datasette.io/-/psutil is a live demo of this plugin, hosted on Google Cloud Run.
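You can also fetch the endpoint programmatically, for example against the live demo above:
import requests
# Print the first few lines of the /-/psutil debug output from the live demo.
response = requests.get("https://latest-with-plugins.datasette.io/-/psutil")
print("\n".join(response.text.splitlines()[:10]))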
Datasette plugin that lets users save and execute queries
Install this plugin in the same environment as Datasette.
$ pip install datasette-saved-queries
When the plugin is installed Datasette will automatically create a saved_queries table in the first connected database when it starts up.
It also creates a save_query writable canned query which you can use to save new queries.
Queries that you save will be added to the query list on the database page.
To set up this plugin locally, first checkout the code. Then create a new virtual environment:
cd datasette-saved-queries
python -mvenv venv
source venv/bin/activate
Or if you are using pipenv:
pipenv shell
Now install the dependencies and test dependencies:
pip install -e '.[test]'
To run the tests:
pytest
Datasette plugin for inserting and updating data
Install this plugin in the same environment as Datasette.
$ pip install datasette-insert
This plugin should always be deployed with additional configuration to prevent unauthenticated access, see notes below.
If you are trying it out on your own local machine, you can pip install the datasette-insert-unsafe plugin to allow access without needing to set up authentication or permissions separately.
Start datasette and make sure it has a writable SQLite database attached to it. If you have not yet created a database file you can use this:
datasette data.db --create
The --create option will create a new empty data.db database file if it does not already exist.
The plugin adds an endpoint that allows data to be inserted or updated and tables to be created by POSTing JSON data to the following URL:
/-/insert/name-of-database/name-of-table
The JSON should look like this:
[
    {
        "id": 1,
        "name": "Cleopaws",
        "age": 5
    },
    {
        "id": 2,
        "name": "Pancakes",
        "age": 5
    }
]
The first time data is posted to the URL a table of that name will be created if it does not already exist, with the desired columns.
You can specify which column should be used as the primary key using the ?pk= URL argument.
Here's how to POST to a database and create a new table using the Python requests library:
import requests

requests.post("http://localhost:8001/-/insert/data/dogs?pk=id", json=[
    {
        "id": 1,
        "name": "Cleopaws",
        "age": 5
    },
    {
        "id": 2,
        "name": "Pancakes",
        "age": 4
    }
])
And here's how to do the same thing using curl:
curl --request POST \
  --data '[
    {
      "id": 1,
      "name": "Cleopaws",
      "age": 5
    },
    {
      "id": 2,
      "name": "Pancakes",
      "age": 4
    }
  ]' \
  'http://localhost:8001/-/insert/data/dogs?pk=id'
Or by piping in JSON like so:
cat dogs.json | curl --request POST -d @- \
'http://localhost:8001/-/insert/data/dogs?pk=id'
If you are inserting a single row you can optionally send it as a dictionary rather than a list with a single item:
curl --request POST \
  --data '{
    "id": 1,
    "name": "Cleopaws",
    "age": 5
  }' \
  'http://localhost:8001/-/insert/data/dogs?pk=id'
If you send data to an existing table with keys that are not reflected by the existing columns, you will get an HTTP 400 error with a JSON response like this:
{
    "status": 400,
    "error": "Unknown keys: 'foo'",
    "error_code": "unknown_keys"
}
If you add ?alter=1 to the URL you are posting to, any missing columns will be automatically added:
curl --request POST \
  --data '[
    {
      "id": 3,
      "name": "Boris",
      "age": 1,
      "breed": "Husky"
    }
  ]' \
  'http://localhost:8001/-/insert/data/dogs?alter=1'
An ""upsert"" operation can be used to partially update a record. With upserts you can send a subset of the keys and, if the ID matches the specified primary key, they will be used to update an existing record.
Upserts can be sent to the /-/upsert API endpoint.
This example will update the age of the dog with ID 1 from 5 to 7:
curl --request POST \
  --data '{
    "id": 1,
    "age": 7
  }' \
  'http://localhost:3322/-/upsert/data/dogs?pk=id'
Like the /-/insert endpoint, the /-/upsert endpoint can accept an array of objects too. It also supports the ?alter=1 option.
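Here's the same upsert expressed with the Python requests library (assuming, as in the curl example above, that Datasette is listening on port 3322 with data.db attached):
import requests
# Partial update: only the "age" key is sent, matched on the "id" primary key.
response = requests.post(
    "http://localhost:3322/-/upsert/data/dogs?pk=id",
    json={"id": 1, "age": 7},
)
print(response.status_code, response.text)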
This plugin defaults to denying all access, to help ensure people don't accidentally deploy it on the open internet in an unsafe configuration.
You can read about Datasette's approach to authentication in the Datasette manual.
You can install the datasette-insert-unsafe plugin to run in unsafe mode, where all access is allowed by default.
I recommend using this plugin in conjunction with datasette-auth-tokens, which provides a mechanism for making authenticated calls using API tokens.
You can then use ""allow"" blocks in the datasette-insert plugin configuration to specify which authenticated tokens are allowed to make use of the API.
Here's an example metadata.json file which restricts access to the /-/insert API to an API token defined in an INSERT_TOKEN environment variable:
{
    "plugins": {
        "datasette-insert": {
            "allow": {
                "bot": "insert-bot"
            }
        },
        "datasette-auth-tokens": {
            "tokens": [
                {
                    "token": {
                        "$env": "INSERT_TOKEN"
                    },
                    "actor": {
                        "bot": "insert-bot"
                    }
                }
            ]
        }
    }
}
With this configuration in place you can start Datasette like this:
INSERT_TOKEN=abc123 datasette data.db -m metadata.json
You can now send data to the API using curl like this:
curl --request POST \
  -H "Authorization: Bearer abc123" \
  --data '[
    {
      "id": 3,
      "name": "Boris",
      "age": 1,
      "breed": "Husky"
    }
  ]' \
  'http://localhost:8001/-/insert/data/dogs'
Or using the Python requests library like so:
requests.post( ""http://localhost:8001/-/insert/data/dogs"", json={""id"": 1, ""name"": ""Cleopaws"", ""age"": 5}, headers={""Authorization"": ""bearer abc123""}, )
Using an ""allow"" block as described above grants full permission to the features enabled by the API.
The API implements several new Datasette permissions, which other plugins can use to make more finely grained decisions.
The full set of permissions are as follows:
- insert:all - all permissions - this is used by the "allow" block described above. Argument: database_name
- insert:insert-update - the ability to insert data into an existing table, or to update data by its primary key. Arguments: (database_name, table_name)
- insert:create-table - the ability to create a new table. Argument: database_name
- insert:alter-table - the ability to add columns to an existing table (using ?alter=1). Arguments: (database_name, table_name)
You can use plugins like datasette-permissions-sql to hook into these more detailed permissions for finely grained control over what actions each authenticated actor can take.
Plugins that implement the permission_allowed() plugin hook can take full control over these permission decisions.
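As a hedged sketch of what such a plugin could look like (the actor value and database name here are hypothetical), this implementation lets only a "dogs-bot" actor create tables in the data database and defers every other decision to Datasette's default handling:
from datasette import hookimpl

@hookimpl
def permission_allowed(actor, action, resource):
    # Only the hypothetical "dogs-bot" actor may create tables in the
    # "data" database; returning None defers all other decisions.
    if action == "insert:create-table" and resource == "data":
        return bool(actor and actor.get("bot") == "dogs-bot")
    return None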
If you start Datasette with the datasette --cors option the following HTTP headers will be added to resources served by this plugin:
Access-Control-Allow-Origin: *
Access-Control-Allow-Headers: content-type,authorization
Access-Control-Allow-Methods: POST
To set up this plugin locally, first checkout the code. Then create a new virtual environment:
cd datasette-insert
python3 -m venv venv
source venv/bin/activate
Now install the dependencies and test dependencies:
pip install -e '.[test]'
To run the tests:
pytest
Export Datasette records as YAML
Install this plugin in the same environment as Datasette.
$ datasette install datasette-yaml
Having installed this plugin, every table and query will gain a new .yaml export link.
You can also construct these URLs directly: /dbname/tablename.yaml
The plugin is running on covid-19.datasettes.com - for example /covid/latest_ny_times_counties_with_populations.yaml
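Because the output is plain YAML it is easy to consume from Python. This sketch assumes PyYAML is installed and that the table export loads as a list of row mappings:
import requests
import yaml  # requires PyYAML
url = (
    "https://covid-19.datasettes.com/covid/"
    "latest_ny_times_counties_with_populations.yaml"
)
rows = yaml.safe_load(requests.get(url).text)
print(type(rows), len(rows))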
To set up this plugin locally, first checkout the code. Then create a new virtual environment:
cd datasette-yaml
python3 -mvenv venv
source venv/bin/activate
Or if you are using pipenv:
pipenv shell
Now install the dependencies and test dependencies:
pip install -e '.[test]'
To run the tests:
pytest
Extremely experimental Datasette output plugin using CSS properties, inspired by Custom Properties as State by Chris Coyier.
More about this project: APIs from CSS without JavaScript: the datasette-css-properties plugin
Install this plugin in the same environment as Datasette.
$ datasette install datasette-css-properties
Once installed, this plugin adds a .css output format to every query result. This will return the first row in the query as a valid CSS file, defining each column as a custom property:
Example: https://latest-with-plugins.datasette.io/fixtures/roadside_attractions.css produces:
:root {
  --pk: '1';
  --name: 'The Mystery Spot';
  --address: '465 Mystery Spot Road, Santa Cruz, CA 95065';
  --latitude: '37.0167';
  --longitude: '-122.0024';
}
If you link this stylesheet to your page you can then do things like this:
<link rel=""stylesheet"" href=""https://latest-with-plugins.datasette.io/fixtures/roadside_attractions.css""> <style> .attraction-name:after { content: var(--name); } </style> <p class=""attraction-name"">Attraction name: </p>
Values will be quoted as CSS strings by default. If you want to return a "raw" value without the quotes - for example to set a CSS property that is numeric or a color - you can specify that column name using the ?_raw=column-name parameter. This can be passed multiple times.
Consider this example query:
select '#' || substr(sha, 0, 6) as [custom-bg] from commits order by author_date desc limit 1;
This returns the first 6 characters of the most recently authored commit with a # prefix. The .css output rendered version looks like this:
:root { --custom-bg: '#97fb1'; }
Adding ?_raw=custom-bg to the URL produces this instead:
:root { --custom-bg: #97fb1; }
This can then be used as a color value like so:
h1 { background-color: var(--custom-bg); }
To set up this plugin locally, first checkout the code. Then create a new virtual environment:
cd datasette-css-properties
python3 -mvenv venv
source venv/bin/activate
Or if you are using pipenv:
pipenv shell
Now install the dependencies and test dependencies:
pip install -e '.[test]'
To run the tests:
pytest
SQL functions for working with placekeys.
Install this plugin in the same environment as Datasette.
$ datasette install datasette-placekey
The following SQL functions are exposed - documentation here.
select
  geo_to_placekey(33.0896104, 129.7900839),
  placekey_to_geo('@6nh-nhh-kvf'),
  placekey_to_geo_latitude('@6nh-nhh-kvf'),
  placekey_to_geo_longitude('@6nh-nhh-kvf'),
  placekey_to_h3('@6nh-nhh-kvf'),
  h3_to_placekey('8a30d94e4c87fff'),
  placekey_to_geojson('@6nh-nhh-kvf'),
  placekey_to_wkt('@6nh-nhh-kvf'),
  placekey_format_is_valid('@6nh-nhh-kvf');
To set up this plugin locally, first checkout the code. Then create a new virtual environment:
cd datasette-placekey
python3 -mvenv venv
source venv/bin/activate
Or if you are using pipenv:
pipenv shell
Now install the dependencies and test dependencies:
pip install -e '.[test]'
To run the tests:
pytest