id,node_id,name,full_name,private,owner,html_url,description,fork,created_at,updated_at,pushed_at,homepage,size,stargazers_count,watchers_count,language,has_issues,has_projects,has_downloads,has_wiki,has_pages,forks_count,archived,disabled,open_issues_count,license,topics,forks,open_issues,watchers,default_branch,permissions,temp_clone_token,organization,network_count,subscribers_count,readme,readme_html,allow_forking,visibility,is_template,template_repository,web_commit_signoff_required,has_discussions 168474970,MDEwOlJlcG9zaXRvcnkxNjg0NzQ5NzA=,dbf-to-sqlite,simonw/dbf-to-sqlite,0,9599,https://github.com/simonw/dbf-to-sqlite,"CLI tool for converting DBF files (dBase, FoxPro etc) to SQLite",0,2019-01-31T06:30:46Z,2021-03-23T01:29:41Z,2020-02-16T00:41:20Z,,8,25,25,Python,1,1,1,1,0,8,0,0,3,apache-2.0,"[""sqlite"", ""foxpro"", ""dbf"", ""dbase"", ""datasette-io"", ""datasette-tool""]",8,3,25,master,"{""admin"": false, ""push"": false, ""pull"": false}",,,8,2,"# dbf-to-sqlite [![PyPI](https://img.shields.io/pypi/v/dbf-to-sqlite.svg)](https://pypi.python.org/pypi/dbf-to-sqlite) [![Travis CI](https://travis-ci.com/simonw/dbf-to-sqlite.svg?branch=master)](https://travis-ci.com/simonw/dbf-to-sqlite) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/dbf-to-sqlite/blob/master/LICENSE) CLI tool for converting DBF files (dBase, FoxPro etc) to SQLite. ## Installation pip install dbf-to-sqlite ## Usage $ dbf-to-sqlite --help Usage: dbf-to-sqlite [OPTIONS] DBF_PATHS... SQLITE_DB Convert DBF files (dBase, FoxPro etc) to SQLite https://github.com/simonw/dbf-to-sqlite Options: --version Show the version and exit. --table TEXT Table name to use (only valid for single files) -v, --verbose Show what's going on --help Show this message and exit. Example usage: $ dbf-to-sqlite *.DBF database.db This will create a new SQLite database called `database.db` containing one table for each of the `DBF` files in the current directory. Looking for DBF files to try this out on? Try downloading the [Himalayan Database](http://himalayandatabase.com/) of all expeditions that have climbed in the Nepal Himalaya. ","

dbf-to-sqlite

CLI tool for converting DBF files (dBase, FoxPro, etc.) to SQLite.

Installation

pip install dbf-to-sqlite

Usage

$ dbf-to-sqlite --help
Usage: dbf-to-sqlite [OPTIONS] DBF_PATHS... SQLITE_DB

  Convert DBF files (dBase, FoxPro etc) to SQLite

  https://github.com/simonw/dbf-to-sqlite

Options:
  --version      Show the version and exit.
  --table TEXT   Table name to use (only valid for single files)
  -v, --verbose  Show what's going on
  --help         Show this message and exit.

Example usage:

$ dbf-to-sqlite *.DBF database.db

This will create a new SQLite database called database.db containing one table for each of the DBF files in the current directory.
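
One quick way to check the result is Python's built-in sqlite3 module. This is a sketch, using the database.db name from the example above:

import sqlite3

conn = sqlite3.connect('database.db')
# Each DBF file becomes one table in the new database
for (name,) in conn.execute('select name from sqlite_master where type = ?', ('table',)):
    print(name)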

Looking for DBF files to try this out on? Try downloading the Himalayan Database (http://himalayandatabase.com/) of all expeditions that have climbed in the Nepal Himalaya.

",,,,,, 195087137,MDEwOlJlcG9zaXRvcnkxOTUwODcxMzc=,datasette-auth-github,simonw/datasette-auth-github,0,9599,https://github.com/simonw/datasette-auth-github,Datasette plugin that authenticates users against GitHub,0,2019-07-03T16:02:53Z,2021-06-03T11:42:54Z,2021-02-25T06:40:17Z,https://datasette-auth-github-demo.datasette.io/,119,34,34,Python,1,1,1,1,0,4,0,0,3,apache-2.0,"[""asgi"", ""datasette"", ""datasette-plugin"", ""datasette-io""]",4,3,34,main,"{""admin"": false, ""push"": false, ""pull"": false}",,,4,1,"# datasette-auth-github [![PyPI](https://img.shields.io/pypi/v/datasette-auth-github.svg)](https://pypi.org/project/datasette-auth-github/) [![Changelog](https://img.shields.io/github/v/release/simonw/datasette-auth-github?include_prereleases&label=changelog)](https://github.com/simonw/datasette-auth-github/releases) [![Tests](https://github.com/simonw/datasette-auth-github/workflows/Test/badge.svg)](https://github.com/simonw/datasette-auth-github/actions?query=workflow%3ATest) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/datasette-auth-github/blob/main/LICENSE) Datasette plugin that authenticates users against GitHub. - [Setup instructions](#setup-instructions) - [The authenticated actor](#the-authenticated-actor) - [Restricting access to specific users](#restricting-access-to-specific-users) - [Restricting access to specific GitHub organizations or teams](#restricting-access-to-specific-github-organizations-or-teams) - [What to do if a user is removed from an organization or team](#what-to-do-if-a-user-is-removed-from-an-organization-or-team) ## Setup instructions * Install the plugin: `datasette install datasette-auth-github` * Create a GitHub OAuth app: https://github.com/settings/applications/new * Set the Authorization callback URL to `http://127.0.0.1:8001/-/github-auth-callback` * Create a `metadata.json` file with the following structure: ```json { ""title"": ""datasette-auth-github demo"", ""plugins"": { ""datasette-auth-github"": { ""client_id"": {""$env"": ""GITHUB_CLIENT_ID""}, ""client_secret"": {""$env"": ""GITHUB_CLIENT_SECRET""} } } } ``` Now you can start Datasette like this, passing in the secrets as environment variables: $ GITHUB_CLIENT_ID=XXX GITHUB_CLIENT_SECRET=YYY datasette \ fixtures.db -m metadata.json Note that hard-coding secrets in `metadata.json` is a bad idea as they will be visible to anyone who can navigate to `/-/metadata`. Instead, we use Datasette's mechanism for [adding secret plugin configuration options](https://docs.datasette.io/en/stable/plugins.html#secret-configuration-values). By default anonymous users will still be able to interact with Datasette. If you wish all users to have to sign in with a GitHub account first, add this to your ``metadata.json``: ```json { ""allow"": { ""id"": ""*"" }, ""plugins"": { ""datasette-auth-github"": { ""..."": ""..."" } } } ``` ## The authenticated actor Visit `/-/actor` when signed in to see the shape of the authenticated actor. It should look something like this: ```json { ""actor"": { ""display"": ""simonw"", ""gh_id"": ""9599"", ""gh_name"": ""Simon Willison"", ""gh_login"": ""simonw"", ""gh_email"": ""..."", ""gh_orgs"": [ ""dogsheep"", ""datasette-project"" ], ""gh_teams"": [ ""dogsheep/test"" ] } } ``` The `gh_orgs` and `gh_teams` properties will only be present if you used `load_teams` or `load_orgs`, documented below. 
## Restricting access to specific users You can use Datasette's [permissions mechanism](https://docs.datasette.io/en/stable/authentication.html) to specify which user or users are allowed to access your instance. Here's how to restrict access to just GitHub user `simonw`: ```json { ""allow"": { ""gh_login"": ""simonw"" }, ""plugins"": { ""datasette-auth-github"": { ""..."": ""..."" } } } ``` This `""allow""` block can be positioned at the database, table or query level instead: see [Configuring permissions in metadata.json](https://docs.datasette.io/en/stable/authentication.html#configuring-permissions-in-metadata-json) for details. Note that GitHub allows users to change their username, and it is possible for other people to claim old usernames. If you are concerned that your users may change their usernames you can key the allow blocks against GitHub user IDs instead, which do not change: ```json { ""allow"": { ""gh_id"": ""9599"" } } ``` ## Restricting access to specific GitHub organizations or teams You can also restrict access to users who are members of a specific GitHub organization. You'll need to configure the plugin to check if the user is a member of that organization when they first sign in. You can do that using the `""load_orgs""` plugin configuration option. Then you can use `""allow"": {""gh_orgs"": [...]}` to specify which organizations are allowed access. ```json { ""plugins"": { ""datasette-auth-github"": { ""..."": ""..."", ""load_orgs"": [""your-organization""] } }, ""allow"": { ""gh_orgs"": ""your-organization"" } } ``` If your organization is [arranged into teams](https://help.github.com/en/articles/organizing-members-into-teams) you can restrict access to a specific team like this: ```json { ""plugins"": { ""datasette-auth-github"": { ""..."": ""..."", ""load_teams"": [ ""your-organization/staff"", ""your-organization/engineering"", ] } }, ""allows"": { ""gh_team"": ""your-organization/engineering"" } } ``` ## What to do if a user is removed from an organization or team A user's organization and team memberships are checked once, when they first sign in. Those teams and organizations are then persisted in the user's signed `ds_actor` cookie. This means that if a user is removed from an organization or team but still has a Datasette cookie, they will still be able to access that Datasette instance. You can remedy this by rotating the `DATASETTE_SECRET` environment variable any time you make changes to your GitHub organization members. Changing this value will cause all of your existing users to be signed out, by invalidating their cookies. When they sign back in again their new memberships will be recorded in a new cookie. See [Configuring the secret](https://docs.datasette.io/en/stable/settings.html?highlight=secret#configuring-the-secret) in the Datasette documentation for more details. ","

datasette-auth-github

Datasette plugin that authenticates users against GitHub.

Setup instructions

{
    ""title"": ""datasette-auth-github demo"",
    ""plugins"": {
        ""datasette-auth-github"": {
            ""client_id"": {""$env"": ""GITHUB_CLIENT_ID""},
            ""client_secret"": {""$env"": ""GITHUB_CLIENT_SECRET""}
        }
    }
}

Now you can start Datasette like this, passing in the secrets as environment variables:

$ GITHUB_CLIENT_ID=XXX GITHUB_CLIENT_SECRET=YYY datasette \
    fixtures.db -m metadata.json

Note that hard-coding secrets in metadata.json is a bad idea as they will be visible to anyone who can navigate to /-/metadata. Instead, we use Datasette's mechanism for adding secret plugin configuration options.

By default anonymous users will still be able to interact with Datasette. If you would rather require every user to sign in with a GitHub account first, add this to your metadata.json:

{
    ""allow"": {
        ""id"": ""*""
    },
    ""plugins"": {
        ""datasette-auth-github"": {
            ""..."": ""...""
        }
    }
}

The authenticated actor

Visit /-/actor when signed in to see the shape of the authenticated actor. It should look something like this:

{
    ""actor"": {
        ""display"": ""simonw"",
        ""gh_id"": ""9599"",
        ""gh_name"": ""Simon Willison"",
        ""gh_login"": ""simonw"",
        ""gh_email"": ""..."",
        ""gh_orgs"": [
            ""dogsheep"",
            ""datasette-project""
        ],
        ""gh_teams"": [
            ""dogsheep/test""
        ]
    }
}

The gh_orgs and gh_teams properties will only be present if you used load_teams or load_orgs, documented below.

Restricting access to specific users

You can use Datasette's permissions mechanism to specify which user or users are allowed to access your instance. Here's how to restrict access to just GitHub user simonw:

{
    ""allow"": {
        ""gh_login"": ""simonw""
    },
    ""plugins"": {
        ""datasette-auth-github"": {
            ""..."": ""...""
        }
    }
}

This ""allow"" block can be positioned at the database, table or query level instead: see Configuring permissions in metadata.json for details.

Note that GitHub allows users to change their username, and it is possible for other people to claim old usernames. If you are concerned that your users may change their usernames you can key the allow blocks against GitHub user IDs instead, which do not change:

{
    ""allow"": {
        ""gh_id"": ""9599""
    }
}
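
If you need to find the numeric ID for a given username, one option (an aside, not part of this plugin) is GitHub's public users API:

$ curl -s https://api.github.com/users/simonw

The id field in the JSON response is the value to use as gh_id.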

Restricting access to specific GitHub organizations or teams

You can also restrict access to users who are members of a specific GitHub organization.

You'll need to configure the plugin to check if the user is a member of that organization when they first sign in. You can do that using the ""load_orgs"" plugin configuration option.

Then you can use ""allow"": {""gh_orgs"": [...]} to specify which organizations are allowed access.

{
    ""plugins"": {
        ""datasette-auth-github"": {
            ""..."": ""..."",
            ""load_orgs"": [""your-organization""]
        }
    },
    ""allow"": {
        ""gh_orgs"": ""your-organization""
    }
}

If your organization is arranged into teams you can restrict access to a specific team like this:

{
    ""plugins"": {
        ""datasette-auth-github"": {
            ""..."": ""..."",
            ""load_teams"": [
                ""your-organization/staff"",
                ""your-organization/engineering"",
            ]
        }
    },
    ""allows"": {
        ""gh_team"": ""your-organization/engineering""
    }
}

What to do if a user is removed from an organization or team

A user's organization and team memberships are checked once, when they first sign in. Those teams and organizations are then persisted in the user's signed ds_actor cookie.

This means that if a user is removed from an organization or team but still has a Datasette cookie, they will still be able to access that Datasette instance.

You can remedy this by rotating the DATASETTE_SECRET environment variable any time you make changes to your GitHub organization members.

Changing this value will cause all of your existing users to be signed out by invalidating their cookies. When they sign back in again, their new memberships will be recorded in a new cookie.
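
One way to rotate the secret, sketched here for a local instance - where you set the environment variable will depend on how you deploy Datasette:

$ DATASETTE_SECRET=$(python3 -c 'import secrets; print(secrets.token_hex(32))') \
    datasette fixtures.db -m metadata.json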

See Configuring the secret (https://docs.datasette.io/en/stable/settings.html?highlight=secret#configuring-the-secret) in the Datasette documentation for more details.

",,,,,, 240815938,MDEwOlJlcG9zaXRvcnkyNDA4MTU5Mzg=,shapefile-to-sqlite,simonw/shapefile-to-sqlite,0,9599,https://github.com/simonw/shapefile-to-sqlite,Load shapefiles into a SQLite (optionally SpatiaLite) database,0,2020-02-16T01:55:29Z,2021-03-26T08:39:43Z,2020-08-23T06:00:41Z,,54,15,15,Python,1,1,1,1,0,0,0,0,3,apache-2.0,"[""sqlite"", ""gis"", ""spatialite"", ""shapefiles"", ""datasette"", ""datasette-io"", ""datasette-tool""]",0,3,15,main,"{""admin"": false, ""push"": false, ""pull"": false}",,,0,1,"# shapefile-to-sqlite [![PyPI](https://img.shields.io/pypi/v/shapefile-to-sqlite.svg)](https://pypi.org/project/shapefile-to-sqlite/) [![CircleCI](https://circleci.com/gh/simonw/shapefile-to-sqlite.svg?style=svg)](https://circleci.com/gh/simonw/shapefile-to-sqlite) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/shapefile-to-sqlite/blob/main/LICENSE) Load shapefiles into a SQLite (optionally SpatiaLite) database. Project background: [Things I learned about shapefiles building shapefile-to-sqlite](https://simonwillison.net/2020/Feb/19/shapefile-to-sqlite/) ## How to install $ pip install shapefile-to-sqlite ## How to use You can run this tool against a shapefile file like so: $ shapefile-to-sqlite my.db features.shp This will load the geometries as GeoJSON in a text column. ## Using with SpatiaLite If you have [SpatiaLite](https://www.gaia-gis.it/fossil/libspatialite/index) available you can load them as SpatiaLite geometries like this: $ shapefile-to-sqlite my.db features.shp --spatialite The data will be loaded into a table called `features` - based on the name of the shapefile. You can specify an alternative table name using `--table`: $ shapefile-to-sqlite my.db features.shp --table=places --spatialite The tool will search for the SpatiaLite module in the following locations: - `/usr/lib/x86_64-linux-gnu/mod_spatialite.so` - `/usr/local/lib/mod_spatialite.dylib` If you have installed the module in another location, you can use the `--spatialite_mod=xxx` option to specify where: $ shapefile-to-sqlite my.db features.shp \ --spatialite_mod=/usr/lib/mod_spatialite.dylib You can use the `--spatial-index` option to create a spatial index on the `geometry` column: $ shapefile-to-sqlite my.db features.shp --spatial-index You can omit `--spatialite` if you use either `--spatialite-mod` or `--spatial-index`. ## Projections By default, this tool will attempt to convert geometries in the shapefile to the WGS 84 projection, for best conformance with the [GeoJSON specification](https://tools.ietf.org/html/rfc7946). If you want it to leave the data in whatever projection was used by the shapefile, use the `--crs=keep` option. You can convert the data to another output projection by passing it to the `--crs` option. For example, to convert to [EPSG:2227](https://epsg.io/2227) (California zone 3) use `--crs=espg:2227`. The full list of formats accepted by the `--crs` option is [documented here](https://pyproj4.github.io/pyproj/stable/api/crs.html#pyproj.crs.CRS.__init__). 
## Extracting columns If your data contains columns with a small number of heavily duplicated values - the names of specific agencies responsible for parcels of land for example - you can extract those columns into separate lookup tables referenced by foreign keys using the `-c` option: $ shapefile-to-sqlite my.db features.shp -c agency This will create an `agency` table with `id` and `name` columns, and will create the `agency` column in your main table as an integer foreign key reference to that table. The `-c` option can be used multiple times. [CPAD_2020a_Units](https://calands.datasettes.com/calands/CPAD_2020a_Units) is an example of a table created using the `-c` option. ","

shapefile-to-sqlite

Load shapefiles into a SQLite (optionally SpatiaLite) database.

Project background: Things I learned about shapefiles building shapefile-to-sqlite (https://simonwillison.net/2020/Feb/19/shapefile-to-sqlite/)

How to install

$ pip install shapefile-to-sqlite

How to use

You can run this tool against a shapefile like so:

$ shapefile-to-sqlite my.db features.shp

This will load the geometries as GeoJSON in a text column.
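
This means the geometries can be read back with just the standard library. A sketch, assuming a features table as described below:

import json
import sqlite3

conn = sqlite3.connect('my.db')
# The geometry column contains a GeoJSON string
(geometry,) = conn.execute('select geometry from features limit 1').fetchone()
print(json.loads(geometry)['type'])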

Using with SpatiaLite

If you have SpatiaLite available, you can load the geometries as SpatiaLite geometries like this:

$ shapefile-to-sqlite my.db features.shp --spatialite

The data will be loaded into a table called features - based on the name of the shapefile. You can specify an alternative table name using --table:

$ shapefile-to-sqlite my.db features.shp --table=places --spatialite

The tool will search for the SpatiaLite module in the following locations:

- /usr/lib/x86_64-linux-gnu/mod_spatialite.so
- /usr/local/lib/mod_spatialite.dylib

If you have installed the module in another location, you can use the --spatialite_mod=xxx option to specify where:

$ shapefile-to-sqlite my.db features.shp \
    --spatialite_mod=/usr/lib/mod_spatialite.dylib

You can use the --spatial-index option to create a spatial index on the geometry column:

$ shapefile-to-sqlite my.db features.shp --spatial-index

You can omit --spatialite if you use either --spatialite-mod or --spatial-index.

Projections

By default, this tool will attempt to convert geometries in the shapefile to the WGS 84 projection, for best conformance with the GeoJSON specification.

If you want it to leave the data in whatever projection was used by the shapefile, use the --crs=keep option.

You can convert the data to another output projection by passing it to the --crs option. For example, to convert to EPSG:2227 (California zone 3) use --crs=epsg:2227.
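
For example:

$ shapefile-to-sqlite my.db features.shp --crs=epsg:2227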

The full list of formats accepted by the --crs option is documented in the pyproj CRS documentation: https://pyproj4.github.io/pyproj/stable/api/crs.html#pyproj.crs.CRS.__init__

Extracting columns

If your data contains columns with a small number of heavily duplicated values (for example, the names of the specific agencies responsible for parcels of land), you can extract those columns into separate lookup tables referenced by foreign keys using the -c option:

$ shapefile-to-sqlite my.db features.shp -c agency

This will create an agency table with id and name columns, and will create the agency column in your main table as an integer foreign key reference to that table.
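
You can confirm the resulting schema by inspecting the database - a sketch, reusing the my.db and table names from the examples above:

import sqlite3

conn = sqlite3.connect('my.db')
# Print the schema of the lookup table and the main table
for (sql,) in conn.execute('select sql from sqlite_master where name in (?, ?)', ('agency', 'features')):
    print(sql)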

The -c option can be used multiple times.

CPAD_2020a_Units (https://calands.datasettes.com/calands/CPAD_2020a_Units) is an example of a table created using the -c option.

",,,,,, 246108561,MDEwOlJlcG9zaXRvcnkyNDYxMDg1NjE=,datasette-column-inspect,simonw/datasette-column-inspect,0,9599,https://github.com/simonw/datasette-column-inspect,Experimental plugin that adds a column inspector,0,2020-03-09T18:11:00Z,2020-12-09T21:46:10Z,2020-12-09T21:47:38Z,,15,1,1,HTML,1,1,1,1,0,0,0,0,3,apache-2.0,"[""datasette"", ""datasette-plugin"", ""datasette-io""]",0,3,1,main,"{""admin"": false, ""push"": false, ""pull"": false}",,,0,1,"# datasette-column-inspect [![PyPI](https://img.shields.io/pypi/v/datasette-column-inspect.svg)](https://pypi.org/project/datasette-column-inspect/) [![Changelog](https://img.shields.io/github/v/release/simonw/datasette-column-inspect?include_prereleases&label=changelog)](https://github.com/simonw/datasette-column-inspect/releases) [![Tests](https://github.com/simonw/datasette-column-inspect/workflows/Test/badge.svg)](https://github.com/simonw/datasette-column-inspect/actions?query=workflow%3ATest) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/datasette-column-inspect/blob/main/LICENSE) Highly experimental Datasette plugin for inspecting columns. ## Installation Install this plugin in the same environment as Datasette. $ pip install datasette-column-inspect ## Usage This plugin adds an icon to each column on the table page which opens an inspection side panel. ","

datasette-column-inspect

Highly experimental Datasette plugin for inspecting columns.

Installation

Install this plugin in the same environment as Datasette.

$ pip install datasette-column-inspect

Usage

This plugin adds an icon to each column on the table page; clicking the icon opens an inspection side panel.

",,,,,, 293164447,MDEwOlJlcG9zaXRvcnkyOTMxNjQ0NDc=,datasette-backup,simonw/datasette-backup,0,9599,https://github.com/simonw/datasette-backup,Plugin adding backup options to Datasette,0,2020-09-05T22:33:29Z,2020-09-24T00:16:59Z,2020-09-07T02:27:30Z,,6,1,1,Python,1,1,1,1,0,0,0,0,3,,"[""datasette"", ""datasette-plugin"", ""datasette-io""]",0,3,1,main,"{""admin"": false, ""push"": false, ""pull"": false}",,,0,1,"# datasette-backup [![PyPI](https://img.shields.io/pypi/v/datasette-backup.svg)](https://pypi.org/project/datasette-backup/) [![Changelog](https://img.shields.io/github/v/release/simonw/datasette-backup?include_prereleases&label=changelog)](https://github.com/simonw/datasette-backup/releases) [![Tests](https://github.com/simonw/datasette-backup/workflows/Test/badge.svg)](https://github.com/simonw/datasette-backup/actions?query=workflow%3ATest) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/datasette-backup/blob/main/LICENSE) Plugin adding backup options to Datasette ## Installation Install this plugin in the same environment as Datasette. $ datasette install datasette-backup ## Usage Once installed, you can download a SQL backup of any of your databases from: /-/backup/dbname.sql ## Development To set up this plugin locally, first checkout the code. Then create a new virtual environment: cd datasette-backup python3 -mvenv venv source venv/bin/activate Or if you are using `pipenv`: pipenv shell Now install the dependencies and tests: pip install -e '.[test]' To run the tests: pytest ","

datasette-backup

Plugin adding backup options to Datasette

Installation

Install this plugin in the same environment as Datasette.

$ datasette install datasette-backup

Usage

Once installed, you can download a SQL backup of any of your databases from:

/-/backup/dbname.sql
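
For example, assuming Datasette is running locally on the default port (adjust the URL for your own deployment), you could download a backup of a database called fixtures like this:

$ curl -o fixtures.sql http://127.0.0.1:8001/-/backup/fixtures.sql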

Development

To set up this plugin locally, first check out the code. Then create a new virtual environment:

cd datasette-backup
python3 -m venv venv
source venv/bin/activate

Or if you are using pipenv:

pipenv shell

Now install the dependencies and test dependencies:

pip install -e '.[test]'

To run the tests:

pytest
",,,,,,