id,node_id,name,full_name,private,owner,html_url,description,fork,created_at,updated_at,pushed_at,homepage,size,stargazers_count,watchers_count,language,has_issues,has_projects,has_downloads,has_wiki,has_pages,forks_count,archived,disabled,open_issues_count,license,topics,forks,open_issues,watchers,default_branch,permissions,temp_clone_token,organization,network_count,subscribers_count,readme,readme_html,allow_forking,visibility,is_template,template_repository,web_commit_signoff_required,has_discussions
195696804,MDEwOlJlcG9zaXRvcnkxOTU2OTY4MDQ=,datasette-cors,simonw/datasette-cors,0,9599,https://github.com/simonw/datasette-cors,Datasette plugin for configuring CORS headers,0,2019-07-07T21:03:11Z,2021-02-27T00:31:13Z,2019-07-11T04:40:57Z,,11,9,9,Python,1,1,1,1,0,0,0,0,1,apache-2.0,"[""datasette"", ""datasette-plugin"", ""datasette-io""]",0,1,9,master,"{""admin"": false, ""push"": false, ""pull"": false}",,,0,3,"# datasette-cors
[PyPI](https://pypi.org/project/datasette-cors/)
[CircleCI](https://circleci.com/gh/simonw/datasette-cors)
[License](https://github.com/simonw/datasette-cors/blob/master/LICENSE)
Datasette plugin for configuring CORS headers, based on https://github.com/simonw/asgi-cors
You can use this plugin to allow JavaScript running on a whitelisted set of domains to make `fetch()` calls to the JSON API provided by your Datasette instance.
## Installation
    pip install datasette-cors
## Configuration
You need to add some configuration to your Datasette `metadata.json` file for this plugin to take effect.
To whitelist specific domains, use this:
```json
{
    ""plugins"": {
        ""datasette-cors"": {
            ""hosts"": [""https://www.example.com""]
        }
    }
}
```
You can also whitelist patterns like this:
```json
{
    ""plugins"": {
        ""datasette-cors"": {
            ""host_wildcards"": [""https://*.example.com""]
        }
    }
}
```
## Testing it
To test this plugin out, run it locally by saving one of the above examples as `metadata.json` and running this:
    $ datasette --memory -m metadata.json
Now visit https://www.example.com/ in your browser, open the browser developer console and paste in the following:
```javascript
fetch(""http://127.0.0.1:8001/:memory:.json?sql=select+sqlite_version%28%29"").then(r => r.json()).then(console.log)
```
If the plugin is running correctly, you will see the JSON response output to the console.
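If you want to confirm the header itself, here is a minimal sketch (not part of the original README) that fetches the same URL with Python's httpx library and prints the CORS header the plugin should add for a whitelisted origin:
```python
# Sketch only: assumes the local Datasette instance above is still running
# and that datasette-cors is configured with hosts = [https://www.example.com]
import httpx

response = httpx.get(
    'http://127.0.0.1:8001/:memory:.json',
    params={'sql': 'select sqlite_version()'},
    headers={'Origin': 'https://www.example.com'},
)
# Expect the whitelisted origin back if the plugin matched the Origin header
print(response.headers.get('access-control-allow-origin'))
```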
","
",,,,,,
205429375,MDEwOlJlcG9zaXRvcnkyMDU0MjkzNzU=,swarm-to-sqlite,dogsheep/swarm-to-sqlite,0,53015001,https://github.com/dogsheep/swarm-to-sqlite,Create a SQLite database containing your checkin history from Foursquare Swarm,0,2019-08-30T17:37:29Z,2021-02-22T07:58:39Z,2021-01-18T04:36:03Z,,49,37,37,Python,1,1,1,1,0,1,0,0,1,apache-2.0,"[""sqlite"", ""foursquare"", ""swarm"", ""foursquare-api"", ""datasette"", ""dogsheep"", ""datasette-io"", ""datasette-tool""]",1,1,37,main,"{""admin"": false, ""push"": false, ""pull"": false}",,53015001,1,3,"# swarm-to-sqlite
[PyPI](https://pypi.org/project/swarm-to-sqlite/)
[Releases](https://github.com/dogsheep/swarm-to-sqlite/releases)
[Tests](https://github.com/dogsheep/swarm-to-sqlite/actions?query=workflow%3ATest)
[License](https://github.com/dogsheep/swarm-to-sqlite/blob/main/LICENSE)
Create a SQLite database containing your checkin history from Foursquare Swarm.
## How to install
    $ pip install swarm-to-sqlite
## Usage
You will need to first obtain a valid OAuth token for your Foursquare account. You can do so using this tool: https://your-foursquare-oauth-token.glitch.me/
The simplest usage is to provide the name of the database file you wish to write to. The tool will prompt you to paste in your token, then download your checkins and store them in the specified database file.
    $ swarm-to-sqlite checkins.db
    Please provide your Foursquare OAuth token:
    Importing 3699 checkins [#########-----------------------] 27% 00:02:31
You can also pass the token as a command-line option:
    $ swarm-to-sqlite checkins.db --token=XXX
Or as an environment variable:
    $ export FOURSQUARE_TOKEN=XXX
    $ swarm-to-sqlite checkins.db
To retrieve just checkins within the past X hours, days or weeks, use the `--since=` option. For example, to pull only checkins that happened within the last 10 days use:
    $ swarm-to-sqlite checkins.db --token=XXX --since=10d
Use `2w` for two weeks, `10h` for ten hours, `3d` for three days.
In addition to saving the checkins to a database, you can also write them to a JSON file using the `--save` option:
    $ swarm-to-sqlite checkins.db --save=checkins.json
Having done this, you can re-import checkins directly from that file (rather than making API calls to fetch data from Foursquare) like this:
    $ swarm-to-sqlite checkins.db --load=checkins.json
## Using with Datasette
The SQLite database produced by this tool is designed to be browsed using [Datasette](https://datasette.io/).
You can install the [datasette-cluster-map](https://datasette.io/plugins/datasette-cluster-map) plugin to view your checkins on a map.
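Before opening the database in Datasette, you could sanity-check what the import created; this is a sketch using [sqlite-utils](https://sqlite-utils.datasette.io/), not something the tool does for you, and the table name is an assumption:
```python
# Sketch only: inspect the database written by swarm-to-sqlite
import sqlite_utils

db = sqlite_utils.Database('checkins.db')
print(db.table_names())      # see which tables the import created
print(db['checkins'].count)  # assumes a checkins table exists
```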
","
",,,,,,
206649770,MDEwOlJlcG9zaXRvcnkyMDY2NDk3NzA=,google-takeout-to-sqlite,dogsheep/google-takeout-to-sqlite,0,53015001,https://github.com/dogsheep/google-takeout-to-sqlite,Save data from Google Takeout to a SQLite database,0,2019-09-05T20:15:15Z,2021-06-08T15:31:47Z,2021-02-24T00:34:55Z,,14,51,51,Python,1,1,1,1,0,4,0,0,6,apache-2.0,"[""google"", ""sqlite"", ""datasette"", ""dogsheep"", ""datasette-io"", ""datasette-tool""]",4,6,51,master,"{""admin"": false, ""push"": false, ""pull"": false}",,53015001,4,3,"# google-takeout-to-sqlite
[PyPI](https://pypi.org/project/google-takeout-to-sqlite/)
[CircleCI](https://circleci.com/gh/dogsheep/google-takeout-to-sqlite)
[License](https://github.com/dogsheep/google-takeout-to-sqlite/blob/master/LICENSE)
Save data from Google Takeout to a SQLite database.
## How to install
    $ pip install google-takeout-to-sqlite
Request your Google data from https://takeout.google.com/ - wait for the email and download the zip file.
This tool only supports a subset of the available options. More will be added over time.
## My Activity
You can request the ""My Activity"" export and then import it with the following command:
    $ google-takeout-to-sqlite my-activity takeout.db ~/Downloads/takeout-20190530.zip
This will create a database file called `takeout.db` if one does not already exist.
## Location History
Your location history records the latitude, longitude and timestamp for everywhere Google has tracked your location. You can import it using this command:
    $ google-takeout-to-sqlite location-history takeout.db ~/Downloads/takeout-20190530.zip
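To see which tables an import created, a quick check like this works (a sketch using Python's standard sqlite3 module, not part of the tool):
```python
# Sketch only: list the tables inside takeout.db after an import
import sqlite3

conn = sqlite3.connect('takeout.db')
for (name,) in conn.execute(
    'select name from sqlite_master where type = ? order by name', ('table',)
):
    print(name)
conn.close()
```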
## Browsing your data with Datasette
Once you have imported Google data into a SQLite database file you can browse your data using [Datasette](https://github.com/simonw/datasette). Install Datasette like so:
    $ pip install datasette
Now browse your data by running this and then visiting `http://localhost:8001/`
    $ datasette takeout.db
Install the [datasette-cluster-map](https://github.com/simonw/datasette-cluster-map) plugin to see your location history on a map:
    $ pip install datasette-cluster-map
","
",,,,,,
291339086,MDEwOlJlcG9zaXRvcnkyOTEzMzkwODY=,airtable-export,simonw/airtable-export,0,9599,https://github.com/simonw/airtable-export,"Export Airtable data to YAML, JSON or SQLite files on disk",0,2020-08-29T19:51:37Z,2021-06-08T17:30:30Z,2021-04-09T23:41:52Z,https://datasette.io/tools/airtable-export,41,33,33,Python,1,1,1,1,0,5,0,0,6,apache-2.0,"[""yaml"", ""airtable"", ""airtable-api"", ""datasette-io"", ""datasette-tool""]",5,6,33,main,"{""admin"": false, ""push"": false, ""pull"": false}",,,5,3,"# airtable-export
[PyPI](https://pypi.org/project/airtable-export/)
[Releases](https://github.com/simonw/airtable-export/releases)
[Tests](https://github.com/simonw/airtable-export/actions?query=workflow%3ATest)
[License](https://github.com/simonw/airtable-export/blob/master/LICENSE)
Export Airtable data to files on disk
## Installation
Install this tool using `pip`:
    $ pip install airtable-export
## Usage
You will need to know the following information:
- Your Airtable base ID - this is a string starting with `app...`
- Your Airtable API key - this is a string starting with `key...`
- The names of each of the tables that you wish to export
You can export all of your data to a folder called `export/` by running the following:
    airtable-export export base_id table1 table2 --key=key
This example would create two files: `export/table1.yml` and `export/table2.yml`.
Rather than passing the API key using the `--key` option you can set it as an environment variable called `AIRTABLE_KEY`.
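As a quick way to confirm an export worked, you could read one of the generated files back in Python; this is a sketch that assumes PyYAML is installed and that the `export/table1.yml` file from the example above exists:
```python
# Sketch only: load an exported YAML file and look at the first record
import yaml

with open('export/table1.yml') as f:
    records = yaml.safe_load(f)

print(len(records), 'records')
print(records[0])  # a dict of that row's Airtable fields
```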
## Export options
By default the tool exports your data as YAML.
You can also export as JSON or as [newline delimited JSON](http://ndjson.org/) using the `--json` or `--ndjson` options:
    airtable-export export base_id table1 table2 --key=key --ndjson
You can pass multiple format options at once. This command will create a `.json`, `.yml` and `.ndjson` file for each exported table:
    airtable-export export base_id table1 table2 \
        --key=key --ndjson --yaml --json
### SQLite database export
You can export tables to a SQLite database file using the `--sqlite database.db` option:
    airtable-export export base_id table1 table2 \
        --key=key --sqlite database.db
This can be combined with other format options. If you only specify `--sqlite` the export directory argument will be ignored.
The SQLite database will have a table created for each table you export. Those tables will have a primary key column called `airtable_id`.
If you run this command against an existing SQLite database, records with matching primary keys will be overwritten by new records from the export.
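One way to check that behaviour (a sketch using sqlite-utils, not part of this tool) is to inspect the primary keys of the exported tables:
```python
# Sketch only: confirm each exported table uses airtable_id as its primary key
import sqlite_utils

db = sqlite_utils.Database('database.db')
for table in db.tables:
    print(table.name, table.pks)  # pks is expected to be ['airtable_id']
```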
## Request options
By default the tool uses [python-httpx](https://www.python-httpx.org)'s default configurations.
You can override the `user-agent` using the `--user-agent` option:
    airtable-export export base_id table1 table2 --key=key --user-agent ""Airtable Export Robot""
You can override the [timeout during a network read operation](https://www.python-httpx.org/advanced/#fine-tuning-the-configuration) using the `--http-read-timeout` option. If not set, this defaults to 5s.
    airtable-export export base_id table1 table2 --key=key --http-read-timeout 60
## Running this using GitHub Actions
[GitHub Actions](https://github.com/features/actions) is GitHub's workflow automation product. You can use it to run `airtable-export` in order to back up your Airtable data to a GitHub repository. Doing this gives you a visible commit history of changes you make to your Airtable data - like [this one](https://github.com/natbat/rockybeaches/commits/main/airtable).
To run this for your own Airtable database you'll first need to add the following secrets to your GitHub repository:
    AIRTABLE_BASE_ID
        The base ID, a string beginning `app...`
    AIRTABLE_KEY
        Your Airtable API key
    AIRTABLE_TABLES
        A space-separated list of the Airtable tables that you want to back up. If any of these contain spaces you will need to enclose them in single quotes, e.g. 'My table with spaces in the name' OtherTableWithNoSpaces
Once you have set those secrets, add the following as a file called `.github/workflows/backup-airtable.yml`:
```yaml
name: Backup Airtable
on:
  workflow_dispatch:
  schedule:
  - cron: '32 0 * * *'
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
    - name: Check out repo
      uses: actions/checkout@v2
    - name: Set up Python
      uses: actions/setup-python@v2
      with:
        python-version: 3.8
    - uses: actions/cache@v2
      name: Configure pip caching
      with:
        path: ~/.cache/pip
        key: ${{ runner.os }}-pip-
        restore-keys: |
          ${{ runner.os }}-pip-
    - name: Install airtable-export
      run: |
        pip install airtable-export
    - name: Backup Airtable to backups/
      env:
        AIRTABLE_BASE_ID: ${{ secrets.AIRTABLE_BASE_ID }}
        AIRTABLE_KEY: ${{ secrets.AIRTABLE_KEY }}
        AIRTABLE_TABLES: ${{ secrets.AIRTABLE_TABLES }}
      run: |-
        airtable-export backups $AIRTABLE_BASE_ID $AIRTABLE_TABLES -v
    - name: Commit and push if it changed
      run: |-
        git config user.name ""Automated""
        git config user.email ""actions@users.noreply.github.com""
        git add -A
        timestamp=$(date -u)
        git commit -m ""Latest data: ${timestamp}"" || exit 0
        git push
```
This will run once a day (at 32 minutes past midnight UTC) and will also run if you manually click the ""Run workflow"" button, see [GitHub Actions: Manual triggers with workflow_dispatch](https://github.blog/changelog/2020-07-06-github-actions-manual-triggers-with-workflow_dispatch/).
## Development
To contribute to this tool, first check out the code. Then create a new virtual environment:
    cd airtable-export
    python -m venv venv
    source venv/bin/activate
Or if you are using `pipenv`:
    pipenv shell
Now install the dependencies and test dependencies:
    pip install -e '.[test]'
To run the tests:
    pytest
","