id,node_id,name,full_name,private,owner,html_url,description,fork,created_at,updated_at,pushed_at,homepage,size,stargazers_count,watchers_count,language,has_issues,has_projects,has_downloads,has_wiki,has_pages,forks_count,archived,disabled,open_issues_count,license,topics,forks,open_issues,watchers,default_branch,permissions,temp_clone_token,organization,network_count,subscribers_count,readme,readme_html,allow_forking,visibility,is_template,template_repository,web_commit_signoff_required,has_discussions 293361514,MDEwOlJlcG9zaXRvcnkyOTMzNjE1MTQ=,geocode-sqlite,eyeseast/geocode-sqlite,0,25778,https://github.com/eyeseast/geocode-sqlite,Geocode rows in a SQLite database table,0,2020-09-06T21:05:39Z,2022-11-02T19:19:56Z,2022-11-07T17:31:05Z,,125,223,223,Python,1,1,1,1,0,6,0,0,8,apache-2.0,[],6,8,223,main,"{""admin"": false, ""maintain"": false, ""push"": false, ""triage"": false, ""pull"": false}",,,6,5,"# geocode-sqlite [![PyPI](https://img.shields.io/pypi/v/geocode-sqlite.svg)](https://pypi.org/project/geocode-sqlite/) [![Changelog](https://img.shields.io/github/v/release/eyeseast/geocode-sqlite?include_prereleases&label=changelog)](https://github.com/eyeseast/geocode-sqlite/releases) [![Tests](https://github.com/eyeseast/geocode-sqlite/workflows/Test/badge.svg)](https://github.com/eyeseast/geocode-sqlite/actions?query=workflow%3ATest) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/eyeseast/geocode-sqlite/blob/master/LICENSE) Geocode rows from a SQLite table ## Installation Install this tool using `pip` or `pipx`: ```sh # install inside a virtualenv pip install geocode-sqlite # install globally pipx install geocode-sqlite ``` ## Usage Let's say you have a spreadsheet with addresses in it, and you'd like to map those locations. First, create a SQLite database and insert rows from that spreadsheet using `sqlite-utils`. ```sh sqlite-utils insert data.db data data.csv --csv ``` Now, geocode it using OpenStreetMap's Nominatim geocoder. ```sh geocode-sqlite nominatim data.db data \ --location=""{address}, {city}, {state} {zip}"" \ --delay=1 \ --user-agent=""this-is-me"" ``` In the command above, you're using Nominatim, which is free and only asks for a unique user agent (`--user-agent`). This will connect to a database (`data.db`) and read all rows from the table `data` (skipping any that already have both a `latitude` and `longitude` column filled). You're also telling the geocoder how to extract a location query (`--location`) from a row of data, using Python's built-in string formatting, and setting a rate limit (`--delay`) of one request per second. For each row where geocoding succeeds, `latitude` and `longitude` will be populated. If you hit an error, or a rate limit, run the same query and pick up where you left off. The resulting table layout can be visualized with [datasette-cluster-map](https://datasette.io/plugins/datasette-cluster-map). Under the hood, this package uses the excellent [geopy](https://geopy.readthedocs.io/en/latest/) library, which is stable and thoroughly road-tested. If you need help understanding a particular geocoder's options, consult [geopy's documentation](https://geopy.readthedocs.io/en/latest/#module-geopy.geocoders). ### Supported Geocoders The CLI currently supports these geocoders: - `bing` - `googlev3` - `mapquest` (and `open-mapquest`) - `mapbox` - `nominatim` - `opencage` #### Adding new geocoders 1. Open an issue with the name of the geocoding service as the ticket title ([example](https://github.com/eyeseast/geocode-sqlite/issues/35)). Put any noteworthy implementation details in the ticket body, like where to get an API key if one is required. 2. Fork the repo and add a geocoder. 3. Add an example to the `Makefile`. Add tests if there's new shared functionality. ### Common arguments and options Each geocoder needs to know where to find the data it's working with. These are the first two arguments: - `database`: a path to a SQLite file, which must already exist - `table`: the name of a table, in that database, which exists and has data to geocode From there, we have a set of options passed to every geocoder: - `location`: a [string format](https://docs.python.org/3/library/stdtypes.html#str.format) that will be expanded with each row to build a full query, to be geocoded - `delay`: a delay between each call (some services require this) - `latitude`: latitude column name - `longitude`: longitude column name - `geojson`: store results as GeoJSON, instead of in latitude and longitude columns - `spatialite`: store results in a SpatiaLite geometry column, instead of in latitude and longitude columns - `raw`: store raw geocoding results in a JSON column Each geocoder takes additional, specific arguments beyond these, such as API keys. Again, [geopy's documentation](https://geopy.readthedocs.io/en/latest/#module-geopy.geocoders) is an excellent resource. ## Using SpatiaLite The `--spatialite` flag will store results in a [geometry column](https://www.gaia-gis.it/gaia-sins/spatialite-cookbook-5/cookbook_topics.adminstration.html#topic_TABLE_to_SpatialTable), instead of `latitude` and `longitude` columns. This is useful if you're doing other GIS operations, such as using a [spatial index](https://www.gaia-gis.it/fossil/libspatialite/wiki?name=SpatialIndex). See the [SpatiaLite cookbook](https://www.gaia-gis.it/gaia-sins/spatialite-cookbook-5/index.html) and [functions list](https://www.gaia-gis.it/gaia-sins/spatialite-sql-latest.html) for more of what's possible. ## Capturing additional geocoding data Geocoding services typically return more data than just coordinates. This might include accuracy, normalized addresses or other context. This can be captured using the `--raw` flag. By default, this will add a `raw` column and store the full geocoding response as JSON. If you want to rename that column, pass a value, like `--raw custom_raw`. The shape of this response object will vary between services. You can query specific values using [SQLite's built-in JSON functions](https://www.sqlite.org/json1.html). For example, this will work with Google's geocoder: ```sql select json_extract(raw, '$.formatted_address') as address, json_extract(raw, '$.geometry.location_type') as location_type from innout_test ``` Check each geocoding service's documentation for what's included in the response. ## Python API The command line interface aims to support the most common options for each geocoder. For more fine-grained control, use the Python API. As with the CLI, this assumes you already have a SQLite database and a table of location data. ```python from geocode_sqlite import geocode_table from geopy.geocoders import Nominatim # create a geocoder instance, with some extra options nominatim = Nominatim(user_agent=""this-is-me"", domain=""nominatim.local.dev"", scheme=""http"") # assuming our database is in the same directory count = geocode_table(""data.db"", ""data"", query_template=""{address}, {city}, {state} {zip}"") # when it's done print(f""Geocoded {count} rows"") ``` Any [geopy geocoder](https://geopy.readthedocs.io/en/latest/#module-geopy.geocoders) can be used with the Python API. ## Development To contribute to this tool, first checkout the code. Then create a new virtual environment: ```sh cd geocode-sqlite python -m venv .venv source .venv/bin/activate ``` Or if you are using `pipenv`: ```sh pipenv shell ``` Now install the dependencies and tests: ```sh pip install -e '.[test]' ``` To run the tests: ```sh pytest ``` Please remember that this library is mainly glue code between other well-tested projects, specifically: [click](https://click.palletsprojects.com/), [geopy](https://geopy.readthedocs.io/en/stable/) and [sqlite-utils](https://sqlite-utils.datasette.io/en/stable/). Tests should focus on making sure those parts fit together correctly. We can assume the parts themselves already work. To that end, there is a test geocoder included: `geocode_sqlite.testing.DummyGeocoder`. That geocoder works with an included dataset of In-N-Out Burger locations provided by [AllThePlaces](https://www.alltheplaces.xyz/). It works like a normal GeoPy geocoder, except it will only return results for In-N-Out locations using the included database. ","

geocode-sqlite

Geocode rows from a SQLite table

Installation

Install this tool using pip or pipx:

# install inside a virtualenv
pip install geocode-sqlite

# install globally
pipx install geocode-sqlite

Usage

Let's say you have a spreadsheet with addresses in it, and you'd like to map those locations. First, create a SQLite database and insert rows from that spreadsheet using sqlite-utils.

sqlite-utils insert data.db data data.csv --csv

Now, geocode it using OpenStreetMap's Nominatim geocoder.

geocode-sqlite nominatim data.db data \
 --location=""{address}, {city}, {state} {zip}"" \
 --delay=1 \
 --user-agent=""this-is-me""

In the command above, you're using Nominatim, which is free and only asks for a unique user agent (--user-agent).

This will connect to a database (data.db) and read all rows from the table data (skipping any that already have both a latitude and longitude column filled).

You're also telling the geocoder how to extract a location query (--location) from a row of data, using Python's built-in string formatting, and setting a rate limit (--delay) of one request per second.

For each row where geocoding succeeds, latitude and longitude will be populated. If you hit an error, or a rate limit, run the same query and pick up where you left off.

The resulting table layout can be visualized with datasette-cluster-map.

Under the hood, this package uses the excellent geopy library, which is stable and thoroughly road-tested. If you need help understanding a particular geocoder's options, consult geopy's documentation.

Supported Geocoders

The CLI currently supports these geocoders:

Adding new geocoders

  1. Open an issue with the name of the geocoding service as the ticket title (example). Put any noteworthy implementation details in the ticket body, like where to get an API key if one is required.
  2. Fork the repo and add a geocoder.
  3. Add an example to the Makefile. Add tests if there's new shared functionality.

Common arguments and options

Each geocoder needs to know where to find the data it's working with. These are the first two arguments:

From there, we have a set of options passed to every geocoder:

Each geocoder takes additional, specific arguments beyond these, such as API keys. Again, geopy's documentation is an excellent resource.

Using SpatiaLite

The --spatialite flag will store results in a geometry column, instead of latitude and longitude columns. This is useful if you're doing other GIS operations, such as using a spatial index. See the SpatiaLite cookbook and functions list for more of what's possible.

Capturing additional geocoding data

Geocoding services typically return more data than just coordinates. This might include accuracy, normalized addresses or other context. This can be captured using the --raw flag. By default, this will add a raw column and store the full geocoding response as JSON. If you want to rename that column, pass a value, like --raw custom_raw.

The shape of this response object will vary between services. You can query specific values using SQLite's built-in JSON functions. For example, this will work with Google's geocoder:

select
  json_extract(raw, '$.formatted_address') as address,
  json_extract(raw, '$.geometry.location_type') as location_type
from
  innout_test

Check each geocoding service's documentation for what's included in the response.

Python API

The command line interface aims to support the most common options for each geocoder. For more fine-grained control, use the Python API.

As with the CLI, this assumes you already have a SQLite database and a table of location data.

from geocode_sqlite import geocode_table
from geopy.geocoders import Nominatim

# create a geocoder instance, with some extra options
nominatim = Nominatim(user_agent=""this-is-me"", domain=""nominatim.local.dev"", scheme=""http"")

# assuming our database is in the same directory
count = geocode_table(""data.db"", ""data"", query_template=""{address}, {city}, {state} {zip}"")

# when it's done
print(f""Geocoded {count} rows"")

Any geopy geocoder can be used with the Python API.

Development

To contribute to this tool, first checkout the code. Then create a new virtual environment:

cd geocode-sqlite
python -m venv .venv
source .venv/bin/activate

Or if you are using pipenv:

pipenv shell

Now install the dependencies and tests:

pip install -e '.[test]'

To run the tests:

pytest

Please remember that this library is mainly glue code between other well-tested projects, specifically: click, geopy and sqlite-utils. Tests should focus on making sure those parts fit together correctly. We can assume the parts themselves already work.

To that end, there is a test geocoder included: geocode_sqlite.testing.DummyGeocoder. That geocoder works with an included dataset of In-N-Out Burger locations provided by AllThePlaces. It works like a normal GeoPy geocoder, except it will only return results for In-N-Out locations using the included database.

",1,public,0,,0,0 374846311,MDEwOlJlcG9zaXRvcnkzNzQ4NDYzMTE=,datasette-geojson,eyeseast/datasette-geojson,0,25778,https://github.com/eyeseast/datasette-geojson,Add GeoJSON output to Datasette queries,0,2021-06-08T01:33:19Z,2022-02-16T19:59:42Z,2022-02-16T20:02:49Z,,1102,3,3,Python,1,1,1,1,0,1,0,0,3,,"[""datasette-io"", ""datasette-plugin"", ""geojson"", ""gis"", ""sqlite""]",1,3,3,main,"{""admin"": false, ""maintain"": false, ""push"": false, ""triage"": false, ""pull"": false}",,,1,1,"# datasette-geojson [![PyPI](https://img.shields.io/pypi/v/datasette-geojson.svg)](https://pypi.org/project/datasette-geojson/) [![Changelog](https://img.shields.io/github/v/release/eyeseast/datasette-geojson?include_prereleases&label=changelog)](https://github.com/eyeseast/datasette-geojson/releases) [![Tests](https://github.com/eyeseast/datasette-geojson/workflows/Test/badge.svg)](https://github.com/eyeseast/datasette-geojson/actions?query=workflow%3ATest) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/eyeseast/datasette-geojson/blob/main/LICENSE) Add GeoJSON as an output option for datasette queries. ## Installation Install this plugin in the same environment as Datasette. datasette install datasette-geojson ## Usage To render GeoJSON, add a `.geojson` extension to any query URL that includes a `geometry` column. That column should be a valid [GeoJSON geometry](https://datatracker.ietf.org/doc/html/rfc7946#section-3.1). For example, you might use [geojson-to-sqlite](https://pypi.org/project/geojson-to-sqlite/) or [shapefile-to-sqlite](https://pypi.org/project/shapefile-to-sqlite/) to load [neighborhood boundaries](https://bostonopendata-boston.opendata.arcgis.com/datasets/3525b0ee6e6b427f9aab5d0a1d0a1a28_0/explore) into a SQLite database. ```sh wget -O neighborhoods.geojson https://opendata.arcgis.com/datasets/3525b0ee6e6b427f9aab5d0a1d0a1a28_0.geojson geojson-to-sqlite boston.db neighborhoods neighborhoods.geojson --spatial-index # create a spatial index datasette serve boston.db --load-extension spatialite ``` If you're using Spatialite, the geometry column will be in a binary format. If not, make sure the `geometry` column is a well-formed [GeoJSON geometry](https://datatracker.ietf.org/doc/html/rfc7946#section-3.1). If you used `geojson-to-sqlite` or `shapefile-to-sqlite`, you should be all set. Run this query in Datasette and you'll see a link to download GeoJSON: ```sql select rowid, OBJECTID, Name, Acres, Neighborhood_ID, SqMiles, ShapeSTArea, ShapeSTLength, geometry from neighborhoods order by rowid limit 101 ``` Note that the geometry column needs to be explicitly _named_ `geometry` or you won't get the option to export GeoJSON. If you want to use a different column, rename it with `AS`: `SELECT other AS geometry FROM my_table`. ![export geojson](img/export-options.png) ## Development To set up this plugin locally, first checkout the code. Then create a new virtual environment: cd datasette-geojson python3 -mvenv venv source venv/bin/activate Or if you are using `pipenv`: pipenv shell Now install the dependencies and tests: pip install -e '.[test]' To run the tests: pytest ","

datasette-geojson

Add GeoJSON as an output option for datasette queries.

Installation

Install this plugin in the same environment as Datasette.

datasette install datasette-geojson

Usage

To render GeoJSON, add a .geojson extension to any query URL that includes a geometry column. That column should be a valid GeoJSON geometry.

For example, you might use geojson-to-sqlite or shapefile-to-sqlite to load neighborhood boundaries into a SQLite database.

wget -O neighborhoods.geojson https://opendata.arcgis.com/datasets/3525b0ee6e6b427f9aab5d0a1d0a1a28_0.geojson
geojson-to-sqlite boston.db neighborhoods neighborhoods.geojson --spatial-index # create a spatial index
datasette serve boston.db --load-extension spatialite

If you're using Spatialite, the geometry column will be in a binary format. If not, make sure the geometry column is a well-formed GeoJSON geometry. If you used geojson-to-sqlite or shapefile-to-sqlite, you should be all set.

Run this query in Datasette and you'll see a link to download GeoJSON:

select
  rowid,
  OBJECTID,
  Name,
  Acres,
  Neighborhood_ID,
  SqMiles,
  ShapeSTArea,
  ShapeSTLength,
  geometry
from
  neighborhoods
order by
  rowid
limit
  101

Note that the geometry column needs to be explicitly named geometry or you won't get the option to export GeoJSON. If you want to use a different column, rename it with AS: SELECT other AS geometry FROM my_table.

Development

To set up this plugin locally, first checkout the code. Then create a new virtual environment:

cd datasette-geojson
python3 -mvenv venv
source venv/bin/activate

Or if you are using pipenv:

pipenv shell

Now install the dependencies and tests:

pip install -e '.[test]'

To run the tests:

pytest
",1,public,0,,, 382986564,MDEwOlJlcG9zaXRvcnkzODI5ODY1NjQ=,datasette-geojson-map,eyeseast/datasette-geojson-map,0,25778,https://github.com/eyeseast/datasette-geojson-map,Render a map for any query with a geometry column,0,2021-07-05T01:54:13Z,2022-03-04T00:16:17Z,2022-04-27T20:39:47Z,,3651,9,9,Python,1,1,1,1,0,0,0,0,10,,"[""datasette-io"", ""datasette-plugin"", ""geojson"", ""gis"", ""leafletjs"", ""mapping""]",0,10,9,main,"{""admin"": false, ""maintain"": false, ""push"": false, ""triage"": false, ""pull"": false}",,,0,1,"# datasette-geojson-map [![PyPI](https://img.shields.io/pypi/v/datasette-geojson-map.svg)](https://pypi.org/project/datasette-geojson-map/) [![Changelog](https://img.shields.io/github/v/release/eyeseast/datasette-geojson-map?include_prereleases&label=changelog)](https://github.com/eyeseast/datasette-geojson-map/releases) [![Tests](https://github.com/eyeseast/datasette-geojson-map/workflows/Test/badge.svg)](https://github.com/eyeseast/datasette-geojson-map/actions?query=workflow%3ATest) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/eyeseast/datasette-geojson-map/blob/main/LICENSE) Render a map for any query with a geometry column ## Installation Install this plugin in the same environment as Datasette. $ datasette install datasette-geojson-map ## Usage Start by loading a GIS file. For example, you might use [geojson-to-sqlite](https://pypi.org/project/geojson-to-sqlite/) or [shapefile-to-sqlite](https://pypi.org/project/shapefile-to-sqlite/) to load [neighborhood boundaries](https://bostonopendata-boston.opendata.arcgis.com/datasets/3525b0ee6e6b427f9aab5d0a1d0a1a28_0/explore) into a SQLite database. ```sh wget -O neighborhoods.geojson https://opendata.arcgis.com/datasets/3525b0ee6e6b427f9aab5d0a1d0a1a28_0.geojson geojson-to-sqlite boston.db neighborhoods neighborhoods.geojson ``` (The command above uses Spatialite, but that's not required.) Start up `datasette` and navigate to the `neighborhoods` table. ```sh datasette serve boston.db # in another terminal tab open http://localhost:8001/boston/neighborhoods ``` You should see a map centered on Boston with each neighborhood outlined. Clicking a boundary will bring up a popup with details on that feature. ![Boston neighbhorhoods map](img/boston-neighborhoods-map.png) This plugin relies on (and will install) [datasette-geojson](https://github.com/eyeseast/datasette-geojson). Any query that includes a `geometry` column will produce a map of the results. This also includes single row views. Run the included `demo` project to see it live. ## Configuration This project uses the same map configuration as [datasette-cluster-map](https://github.com/simonw/datasette-cluster-map). Here's how you would use [Stamen's terrain tiles](http://maps.stamen.com/terrain/#12/37.7706/-122.3782): ```yaml plugins: datasette-geojson-map: tile_layer: https://stamen-tiles-{s}.a.ssl.fastly.net/terrain/{z}/{x}/{y}.{ext} tile_layer_options: attribution: >- Map tiles by Stamen Design, under CC BY 3.0. Data by OpenStreetMap, under ODbL. subdomains: abcd minZoom: 1 maxZoom: 16 ext: jpg ``` Options: - `tile_layer`: Use a URL template that can be passed to a [Leaflet Tilelayer](https://leafletjs.com/reference-1.7.1.html#tilelayer) - `tile_layer_options`: All options will be passed to the tile layer. See [Leaflet documentation](https://leafletjs.com/reference-1.7.1.html#tilelayer) for more on possible values here. ## Styling map features Map features can be styled using the [simplestyle-spec](https://github.com/mapbox/simplestyle-spec). This requires setting specific fields on returned rows. Here's an example: ```sql SELECT Name, geometry, ""#ff0000"" as fill, ""#0000ff"" as stroke, 0.2 as stroke-width, from neighborhoods ``` That will render a neighborhood map where each polygon is filled in red, outlined in blue and lines are 0.2 pixels wide. A more useful approach would use the `CASE` statement to color features based on data: ```sql SELECT Name, geometry, CASE Name WHEN ""Roslindale"" THEN ""#ff0000"" WHEN ""Dorchester"" THEN ""#0000ff"" ELSE ""#dddddd"" END fill FROM neighborhoods ``` This will fill Roslindale in red, Dorchester in blue and all other neighborhoods in gray. ## Development To set up this plugin locally, first checkout the code. Then create a new virtual environment: cd datasette-geojson-map python3 -mvenv venv source venv/bin/activate Or if you are using `pipenv`: pipenv shell Now install the dependencies and test dependencies: pip install -e '.[test]' To run the tests: pytest ","

datasette-geojson-map

Render a map for any query with a geometry column

Installation

Install this plugin in the same environment as Datasette.

$ datasette install datasette-geojson-map

Usage

Start by loading a GIS file.

For example, you might use geojson-to-sqlite or shapefile-to-sqlite to load neighborhood boundaries into a SQLite database.

wget -O neighborhoods.geojson https://opendata.arcgis.com/datasets/3525b0ee6e6b427f9aab5d0a1d0a1a28_0.geojson
geojson-to-sqlite boston.db neighborhoods neighborhoods.geojson

(The command above uses Spatialite, but that's not required.)

Start up datasette and navigate to the neighborhoods table.

datasette serve boston.db

# in another terminal tab
open http://localhost:8001/boston/neighborhoods

You should see a map centered on Boston with each neighborhood outlined. Clicking a boundary will bring up a popup with details on that feature.

This plugin relies on (and will install) datasette-geojson. Any query that includes a geometry column will produce a map of the results. This also includes single row views.

Run the included demo project to see it live.

Configuration

This project uses the same map configuration as datasette-cluster-map. Here's how you would use Stamen's terrain tiles:

plugins:
  datasette-geojson-map:
    tile_layer: https://stamen-tiles-{s}.a.ssl.fastly.net/terrain/{z}/{x}/{y}.{ext}
    tile_layer_options:
      attribution: >-
        Map tiles by <a href=""http://stamen.com"">Stamen Design</a>, 
        under <a href=""http://creativecommons.org/licenses/by/3.0"">CC BY 3.0</a>. 
        Data by <a href=""http://openstreetmap.org"">OpenStreetMap</a>, 
        under <a href=""http://www.openstreetmap.org/copyright"">ODbL</a>.
      subdomains: abcd
      minZoom: 1
      maxZoom: 16
      ext: jpg

Options:

Styling map features

Map features can be styled using the simplestyle-spec. This requires setting specific fields on returned rows. Here's an example:

SELECT Name, geometry, ""#ff0000"" as fill, ""#0000ff"" as stroke, 0.2 as stroke-width,  from neighborhoods

That will render a neighborhood map where each polygon is filled in red, outlined in blue and lines are 0.2 pixels wide.

A more useful approach would use the CASE statement to color features based on data:

SELECT
  Name,
  geometry,
  CASE
    Name
    WHEN ""Roslindale"" THEN ""#ff0000""
    WHEN ""Dorchester"" THEN ""#0000ff""
    ELSE ""#dddddd""
  END fill
FROM
  neighborhoods

This will fill Roslindale in red, Dorchester in blue and all other neighborhoods in gray.

Development

To set up this plugin locally, first checkout the code. Then create a new virtual environment:

cd datasette-geojson-map
python3 -mvenv venv
source venv/bin/activate

Or if you are using pipenv:

pipenv shell

Now install the dependencies and test dependencies:

pip install -e '.[test]'

To run the tests:

pytest
",1,public,0,,, 461322238,R_kgDOG383_g,sqlite-colorbrewer,eyeseast/sqlite-colorbrewer,0,25778,https://github.com/eyeseast/sqlite-colorbrewer,A custom function to use ColorBrewer scales in SQLite queries,0,2022-02-19T21:53:46Z,2022-03-03T17:16:40Z,2022-03-02T03:04:56Z,,19,4,4,Python,1,1,1,1,0,0,0,0,0,apache-2.0,[],0,0,4,main,"{""admin"": false, ""maintain"": false, ""push"": false, ""triage"": false, ""pull"": false}",,,0,1,"# sqlite-colorbrewer [![PyPI](https://img.shields.io/pypi/v/sqlite-colorbrewer.svg)](https://pypi.org/project/sqlite-colorbrewer/) [![Changelog](https://img.shields.io/github/v/release/eyeseast/sqlite-colorbrewer?include_prereleases&label=changelog)](https://github.com/eyeseast/sqlite-colorbrewer/releases) [![Tests](https://github.com/eyeseast/sqlite-colorbrewer/workflows/Test/badge.svg)](https://github.com/eyeseast/sqlite-colorbrewer/actions?query=workflow%3ATest) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/eyeseast/sqlite-colorbrewer/blob/main/LICENSE) A custom function to use [ColorBrewer](https://colorbrewer2.org/) scales in SQLite queries. Colors are exported from [here](https://colorbrewer2.org/export/colorbrewer.json). ## Installation To install as a Python library and use with the [standard SQLite3 module](https://docs.python.org/3/library/sqlite3.html): pip install sqlite-colorbrewer To install this plugin in the same environment as Datasette. datasette install sqlite-colorbrewer ## Usage If you're using this library with Datasette, it will be automatically registered as a plugin and available for use in SQL queries, like so: ```sql SELECT colorbrewer('Blues', 9, 0); ``` That will return a single value: `""rgb(247,251,255)""` To use with a SQLite connection outside of Datasette, use the `register` function: ```python >>> import sqlite3 >>> import sqlite_colorbrewer >>> conn = sqlite3.connect(':memory') >>> sqlite_colorbrewer.register(conn) >>> cursor = conn.execute(""SELECT colorbrewer('Blues', 9, 0);"") >>> result = next(cursor) >>> print(result) rgb(247,251,255) ``` ## Development To set up this plugin locally, first checkout the code. Then create a new virtual environment: cd sqlite-colorbrewer python3 -mvenv venv source venv/bin/activate Or if you are using `pipenv`: pipenv shell Now install the dependencies and test dependencies: pip install -e '.[test]' To run the tests: pytest To build `sqlite_colorbrewer/colorbrewer.py`: ./json_to_python.py black . # to format the resulting file ## ColorBrewer Copyright (c) 2002 Cynthia Brewer, Mark Harrower, and The Pennsylvania State University. Licensed under the Apache License, Version 2.0 (the ""License""); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an ""AS IS"" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. See the [ColorBrewer updates](http://www.personal.psu.edu/cab38/ColorBrewer/ColorBrewer_updates.html) for updates to copyright information. ","

sqlite-colorbrewer

A custom function to use ColorBrewer scales in SQLite queries.

Colors are exported from here.

Installation

To install as a Python library and use with the standard SQLite3 module:

pip install sqlite-colorbrewer

To install this plugin in the same environment as Datasette.

datasette install sqlite-colorbrewer

Usage

If you're using this library with Datasette, it will be automatically registered as a plugin and available for use in SQL queries, like so:

SELECT colorbrewer('Blues', 9, 0);

That will return a single value: ""rgb(247,251,255)""

To use with a SQLite connection outside of Datasette, use the register function:

>>> import sqlite3
>>> import sqlite_colorbrewer

>>> conn = sqlite3.connect(':memory')
>>> sqlite_colorbrewer.register(conn)

>>> cursor = conn.execute(""SELECT colorbrewer('Blues', 9, 0);"")
>>> result = next(cursor)
>>> print(result)
rgb(247,251,255)

Development

To set up this plugin locally, first checkout the code. Then create a new virtual environment:

cd sqlite-colorbrewer
python3 -mvenv venv
source venv/bin/activate

Or if you are using pipenv:

pipenv shell

Now install the dependencies and test dependencies:

pip install -e '.[test]'

To run the tests:

pytest

To build sqlite_colorbrewer/colorbrewer.py:

./json_to_python.py
black . # to format the resulting file

ColorBrewer

Copyright (c) 2002 Cynthia Brewer, Mark Harrower, and The Pennsylvania State University.

Licensed under the Apache License, Version 2.0 (the ""License""); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an ""AS IS"" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

See the ColorBrewer updates for updates to copyright information.

",1,public,0,,, 499911426,R_kgDOHcwLAg,datasette-query-files,eyeseast/datasette-query-files,0,25778,https://github.com/eyeseast/datasette-query-files,Write Datasette canned queries as plain SQL files,0,2022-06-04T18:52:07Z,2022-07-02T19:46:52Z,2022-07-02T20:40:51Z,,24,8,8,Python,1,1,1,1,0,0,0,0,2,apache-2.0,"[""datasette"", ""datasette-plugin"", ""python"", ""sql"", ""sqlite""]",0,2,8,main,"{""admin"": false, ""maintain"": false, ""push"": false, ""triage"": false, ""pull"": false}",,,0,1,"# datasette-query-files [![PyPI](https://img.shields.io/pypi/v/datasette-query-files.svg)](https://pypi.org/project/datasette-query-files/) [![Changelog](https://img.shields.io/github/v/release/eyeseast/datasette-query-files?include_prereleases&label=changelog)](https://github.com/eyeseast/datasette-query-files/releases) [![Tests](https://github.com/eyeseast/datasette-query-files/workflows/Test/badge.svg)](https://github.com/eyeseast/datasette-query-files/actions?query=workflow%3ATest) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/eyeseast/datasette-query-files/blob/main/LICENSE) Write Datasette canned queries as plain SQL files. ## Installation Install this plugin in the same environment as Datasette. datasette install datasette-query-files Or using `pip` or `pipenv`: pip install datasette-query-files pipenv install datasette-query-files ## Usage This plugin will look for [canned queries](https://docs.datasette.io/en/stable/sql_queries.html#canned-queries) in the filesystem, in addition any defined in metadata. Let's say you're working in a directory called `project-directory`, with a database file called `my-project.db`. Start by creating a `queries` directory with a `my-project` directory inside it. Any SQL file inside that `my-project` folder will become a canned query that can be run on the `my-project` database. If you have a `query-name.sql` file and a `query-name.json` (or `query-name.yml`) file in the same directory, the JSON file will be used as query metadata. ``` project-directory/ my-project.db queries/ my-project/ query-name.sql # a query query-name.yml # query metadata ``` ## Development To set up this plugin locally, first checkout the code. Then create a new virtual environment: cd datasette-query-files python3 -m venv venv source venv/bin/activate Now install the dependencies and test dependencies: pip install -e '.[test]' To run the tests: pytest ","

datasette-query-files

Write Datasette canned queries as plain SQL files.

Installation

Install this plugin in the same environment as Datasette.

datasette install datasette-query-files

Or using pip or pipenv:

pip install datasette-query-files
pipenv install datasette-query-files

Usage

This plugin will look for canned queries in the filesystem, in addition any defined in metadata.

Let's say you're working in a directory called project-directory, with a database file called my-project.db. Start by creating a queries directory with a my-project directory inside it. Any SQL file inside that my-project folder will become a canned query that can be run on the my-project database. If you have a query-name.sql file and a query-name.json (or query-name.yml) file in the same directory, the JSON file will be used as query metadata.

project-directory/
  my-project.db
  queries/
    my-project/
      query-name.sql # a query
      query-name.yml # query metadata

Development

To set up this plugin locally, first checkout the code. Then create a new virtual environment:

cd datasette-query-files
python3 -m venv venv
source venv/bin/activate

Now install the dependencies and test dependencies:

pip install -e '.[test]'

To run the tests:

pytest
",1,public,0,"{""id"": 400878073, ""node_id"": ""MDEwOlJlcG9zaXRvcnk0MDA4NzgwNzM="", ""name"": ""datasette-plugin-template-repository"", ""full_name"": ""simonw/datasette-plugin-template-repository"", ""private"": false, ""owner"": {""login"": ""simonw"", ""id"": 9599, ""node_id"": ""MDQ6VXNlcjk1OTk="", ""avatar_url"": ""https://avatars.githubusercontent.com/u/9599?v=4"", ""gravatar_id"": """", ""url"": ""https://api.github.com/users/simonw"", ""html_url"": ""https://github.com/simonw"", ""followers_url"": ""https://api.github.com/users/simonw/followers"", ""following_url"": ""https://api.github.com/users/simonw/following{/other_user}"", ""gists_url"": ""https://api.github.com/users/simonw/gists{/gist_id}"", ""starred_url"": ""https://api.github.com/users/simonw/starred{/owner}{/repo}"", ""subscriptions_url"": ""https://api.github.com/users/simonw/subscriptions"", ""organizations_url"": ""https://api.github.com/users/simonw/orgs"", ""repos_url"": ""https://api.github.com/users/simonw/repos"", ""events_url"": ""https://api.github.com/users/simonw/events{/privacy}"", ""received_events_url"": ""https://api.github.com/users/simonw/received_events"", ""type"": ""User"", ""site_admin"": false}, ""html_url"": ""https://github.com/simonw/datasette-plugin-template-repository"", ""description"": ""GitHub template repository for creating new Datasette plugins, using the simonw/datasette-plugin cookiecutter template"", ""fork"": false, ""url"": ""https://api.github.com/repos/simonw/datasette-plugin-template-repository"", ""forks_url"": ""https://api.github.com/repos/simonw/datasette-plugin-template-repository/forks"", ""keys_url"": ""https://api.github.com/repos/simonw/datasette-plugin-template-repository/keys{/key_id}"", ""collaborators_url"": ""https://api.github.com/repos/simonw/datasette-plugin-template-repository/collaborators{/collaborator}"", ""teams_url"": ""https://api.github.com/repos/simonw/datasette-plugin-template-repository/teams"", ""hooks_url"": ""https://api.github.com/repos/simonw/datasette-plugin-template-repository/hooks"", ""issue_events_url"": ""https://api.github.com/repos/simonw/datasette-plugin-template-repository/issues/events{/number}"", ""events_url"": ""https://api.github.com/repos/simonw/datasette-plugin-template-repository/events"", ""assignees_url"": ""https://api.github.com/repos/simonw/datasette-plugin-template-repository/assignees{/user}"", ""branches_url"": ""https://api.github.com/repos/simonw/datasette-plugin-template-repository/branches{/branch}"", ""tags_url"": ""https://api.github.com/repos/simonw/datasette-plugin-template-repository/tags"", ""blobs_url"": ""https://api.github.com/repos/simonw/datasette-plugin-template-repository/git/blobs{/sha}"", ""git_tags_url"": ""https://api.github.com/repos/simonw/datasette-plugin-template-repository/git/tags{/sha}"", ""git_refs_url"": ""https://api.github.com/repos/simonw/datasette-plugin-template-repository/git/refs{/sha}"", ""trees_url"": ""https://api.github.com/repos/simonw/datasette-plugin-template-repository/git/trees{/sha}"", ""statuses_url"": ""https://api.github.com/repos/simonw/datasette-plugin-template-repository/statuses/{sha}"", ""languages_url"": ""https://api.github.com/repos/simonw/datasette-plugin-template-repository/languages"", ""stargazers_url"": ""https://api.github.com/repos/simonw/datasette-plugin-template-repository/stargazers"", ""contributors_url"": ""https://api.github.com/repos/simonw/datasette-plugin-template-repository/contributors"", ""subscribers_url"": ""https://api.github.com/repos/simonw/datasette-plugin-template-repository/subscribers"", ""subscription_url"": ""https://api.github.com/repos/simonw/datasette-plugin-template-repository/subscription"", ""commits_url"": ""https://api.github.com/repos/simonw/datasette-plugin-template-repository/commits{/sha}"", ""git_commits_url"": ""https://api.github.com/repos/simonw/datasette-plugin-template-repository/git/commits{/sha}"", ""comments_url"": ""https://api.github.com/repos/simonw/datasette-plugin-template-repository/comments{/number}"", ""issue_comment_url"": ""https://api.github.com/repos/simonw/datasette-plugin-template-repository/issues/comments{/number}"", ""contents_url"": ""https://api.github.com/repos/simonw/datasette-plugin-template-repository/contents/{+path}"", ""compare_url"": ""https://api.github.com/repos/simonw/datasette-plugin-template-repository/compare/{base}...{head}"", ""merges_url"": ""https://api.github.com/repos/simonw/datasette-plugin-template-repository/merges"", ""archive_url"": ""https://api.github.com/repos/simonw/datasette-plugin-template-repository/{archive_format}{/ref}"", ""downloads_url"": ""https://api.github.com/repos/simonw/datasette-plugin-template-repository/downloads"", ""issues_url"": ""https://api.github.com/repos/simonw/datasette-plugin-template-repository/issues{/number}"", ""pulls_url"": ""https://api.github.com/repos/simonw/datasette-plugin-template-repository/pulls{/number}"", ""milestones_url"": ""https://api.github.com/repos/simonw/datasette-plugin-template-repository/milestones{/number}"", ""notifications_url"": ""https://api.github.com/repos/simonw/datasette-plugin-template-repository/notifications{?since,all,participating}"", ""labels_url"": ""https://api.github.com/repos/simonw/datasette-plugin-template-repository/labels{/name}"", ""releases_url"": ""https://api.github.com/repos/simonw/datasette-plugin-template-repository/releases{/id}"", ""deployments_url"": ""https://api.github.com/repos/simonw/datasette-plugin-template-repository/deployments"", ""created_at"": ""2021-08-28T19:50:28Z"", ""updated_at"": ""2022-06-10T13:28:46Z"", ""pushed_at"": ""2022-03-16T23:42:16Z"", ""git_url"": ""git://github.com/simonw/datasette-plugin-template-repository.git"", ""ssh_url"": ""git@github.com:simonw/datasette-plugin-template-repository.git"", ""clone_url"": ""https://github.com/simonw/datasette-plugin-template-repository.git"", ""svn_url"": ""https://github.com/simonw/datasette-plugin-template-repository"", ""homepage"": """", ""size"": 9, ""stargazers_count"": 15, ""watchers_count"": 15, ""language"": null, ""has_issues"": true, ""has_projects"": true, ""has_downloads"": true, ""has_wiki"": true, ""has_pages"": false, ""forks_count"": 0, ""mirror_url"": null, ""archived"": false, ""disabled"": false, ""open_issues_count"": 0, ""license"": null, ""allow_forking"": true, ""is_template"": true, ""web_commit_signoff_required"": false, ""topics"": [], ""visibility"": ""public"", ""forks"": 0, ""open_issues"": 0, ""watchers"": 15, ""default_branch"": ""main"", ""permissions"": {""admin"": false, ""maintain"": false, ""push"": false, ""triage"": false, ""pull"": false}, ""temp_clone_token"": """"}",0,