pypi_packages

7 rows where requires_dist contains "click" and requires_python = ">=3.6"
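
The filter above corresponds, roughly, to the following SQL (a sketch only: Datasette generates its own statement for the "contains" and equality column filters, so the exact query may differ):

```sql
-- Approximate query behind this page (a sketch, not Datasette's exact SQL)
SELECT *
FROM pypi_packages
WHERE requires_dist LIKE '%click%'
  AND requires_python = '>=3.6';
```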

Columns: name, summary, classifiers, description, author, author_email, description_content_type, home_page, keywords, license, maintainer, maintainer_email, package_url, platform, project_url, project_urls, release_url, requires_dist, requires_python, version, yanked, yanked_reason

name: download-tiles
summary: Download map tiles and store them in an MBTiles database
classifiers: []
description: # download-tiles [![PyPI](https://img.shields.io/pypi/v/download-tiles.svg)](https://pypi.org/project/download-tiles/) [![Changelog](https://img.shields.io/github/v/release/simonw/download-tiles?include_prereleases&label=changelog)](https://github.com/simonw/download-tiles/releases) [![Tests](https://github.com/simonw/download-tiles/workflows/Test/badge.svg)](https://github.com/simonw/download-tiles/actions?query=workflow%3ATest) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/download-tiles/blob/master/LICENSE) Download map tiles and store them in an MBTiles database ## Installation Install this tool using `pip`: $ pip install download-tiles ## Usage This tool downloads tiles from a specified [TMS (Tile Map Server)](https://wiki.openstreetmap.org/wiki/TMS) server for a specified bounding box and range of zoom levels and stores those tiles in an MBTiles SQLite database. It is a command-line wrapper around the [Landez](https://github.com/makinacorpus/landez) Python library. **Please use this tool responsibly**. Consult the usage policies of the tile servers you are interacting with, for example the [OpenStreetMap Tile Usage Policy](https://operations.osmfoundation.org/policies/tiles/). Running the following will download zoom levels 0-3 of OpenStreetMap, 85 tiles total, and store them in a SQLite database called `world.mbtiles`: download-tiles world.mbtiles You can customize which tile and zoom levels are downloaded using command options: `--zoom-levels=0-3` or `-z=0-3` The different zoom levels to download. Specify a single number, e.g. `15`, or a range of numbers e.g. `0-4`. Be careful with this setting as you can easily go over the limits requested by the underlying tile server. `--bbox=3.9,-6.3,14.5,10.2` or `-b=3.9,-6.3,14.5,10.2` The bounding box to fetch. Should be specified as `min-lon,min-lat,max-lon,max-lat`. You can use [bboxfinder.com](http://bboxfinder.com/) to find these for different areas. `--city=london` or `--country=madagas…
author: Simon Willison
author_email:
description_content_type: text/markdown
home_page: https://github.com/simonw/download-tiles
keywords:
license: Apache License, Version 2.0
maintainer:
maintainer_email:
package_url: https://pypi.org/project/download-tiles/
platform:
project_url: https://pypi.org/project/download-tiles/
project_urls: {"CI": "https://github.com/simonw/download-tiles/actions", "Changelog": "https://github.com/simonw/download-tiles/releases", "Homepage": "https://github.com/simonw/download-tiles", "Issues": "https://github.com/simonw/download-tiles/issues"}
release_url: https://pypi.org/project/download-tiles/0.4.1/
requires_dist: ["click", "requests", "landez (==2.5.0)", "pytest ; extra == 'test'", "requests-mock ; extra == 'test'"]
requires_python: >=3.6
version: 0.4.1
yanked: 0
yanked_reason:

name: git-history
summary: Tools for analyzing Git history using SQLite
classifiers: []
description: # git-history [![PyPI](https://img.shields.io/pypi/v/git-history.svg)](https://pypi.org/project/git-history/) [![Changelog](https://img.shields.io/github/v/release/simonw/git-history?include_prereleases&label=changelog)](https://github.com/simonw/git-history/releases) [![Tests](https://github.com/simonw/git-history/workflows/Test/badge.svg)](https://github.com/simonw/git-history/actions?query=workflow%3ATest) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/git-history/blob/master/LICENSE) Tools for analyzing Git history using SQLite For background on this project see [git-history: a tool for analyzing scraped data collected using Git and SQLite](https://simonwillison.net/2021/Dec/7/git-history/) ## Installation Install this tool using `pip`: $ pip install git-history ## Demos [git-history-demos.datasette.io](http://git-history-demos.datasette.io/) hosts three example databases created using this tool: - [pge-outages](https://git-history-demos.datasette.io/pge-outages) shows a history of PG&E (the electricity supplier) [outages](https://pgealerts.alerts.pge.com/outagecenter/), using data collected in [simonw/pge-outages](https://github.com/simonw/pge-outages) converted using [pge-outages.sh](https://github.com/simonw/git-history/blob/main/demos/pge-outages.sh) - [ca-fires](https://git-history-demos.datasette.io/ca-fires) shows a history of fires in California reported on [fire.ca.gov/incidents](https://www.fire.ca.gov/incidents/), from data in [simonw/ca-fires-history](https://github.com/simonw/ca-fires-history) converted using [ca-fires.sh](https://github.com/simonw/git-history/blob/main/demos/ca-fires.sh) - [sf-bay-511](https://git-history-demos.datasette.io/sf-bay-511) has records of San Francisco Bay Area traffic and transit incident data from [511.org](https://511.org/), collected in [dbreunig/511-events-history](https://github.com/dbreunig/511-events-history) converted using [sf-bay-511.sh](https://github.com/simonw/git-history/blob/main/demos/s…
author: Simon Willison
author_email:
description_content_type: text/markdown
home_page: https://github.com/simonw/git-history
keywords:
license: Apache License, Version 2.0
maintainer:
maintainer_email:
package_url: https://pypi.org/project/git-history/
platform:
project_url: https://pypi.org/project/git-history/
project_urls: {"CI": "https://github.com/simonw/git-history/actions", "Changelog": "https://github.com/simonw/git-history/releases", "Homepage": "https://github.com/simonw/git-history", "Issues": "https://github.com/simonw/git-history/issues"}
release_url: https://pypi.org/project/git-history/0.6.1/
requires_dist: ["click", "GitPython", "sqlite-utils (>=3.19)", "pytest ; extra == 'test'", "cogapp ; extra == 'test'"]
requires_python: >=3.6
version: 0.6.1
yanked: 0
yanked_reason:

name: google-drive-to-sqlite
summary: Create a SQLite database containing metadata from Google Drive
classifiers: []
description: # google-drive-to-sqlite [![PyPI](https://img.shields.io/pypi/v/google-drive-to-sqlite.svg)](https://pypi.org/project/google-drive-to-sqlite/) [![Changelog](https://img.shields.io/github/v/release/simonw/google-drive-to-sqlite?include_prereleases&label=changelog)](https://github.com/simonw/google-drive-to-sqlite/releases) [![Tests](https://github.com/simonw/google-drive-to-sqlite/workflows/Test/badge.svg)](https://github.com/simonw/google-drive-to-sqlite/actions?query=workflow%3ATest) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/google-drive-to-sqlite/blob/master/LICENSE) Create a SQLite database containing metadata from [Google Drive](https://www.google.com/drive) If you use Google Drive, and especially if you have shared drives with other people there's a good chance you have hundreds or even thousands of files that you may not be fully aware of. This tool can download metadata about those files - their names, sizes, folders, content types, permissions, creation dates and more - and store them in a SQLite database. This lets you use SQL to analyze your Google Drive contents, using [Datasette](https://datasette.io/) or the SQLite command-line tool or any other SQLite database browsing software. ## Installation Install this tool using `pip`: pip install google-drive-to-sqlite ## Quickstart Authenticate with Google Drive by running: google-drive-to-sqlite auth Now create a SQLite database with metadata about all of the files you have starred using: google-drive-to-sqlite files starred.db --starred You can explore the resulting database using [Datasette](https://datasette.io/): $ pip install datasette $ datasette starred.db INFO: Started server process [24661] INFO: Uvicorn running on http://127.0.0.1:8001 ## Authentication > :warning: **This application has not yet been verified by Google** - you may find you are unable to authenticate until that verification is complete. [#10](https://github.com/simonw/googl…
author: Simon Willison
author_email:
description_content_type: text/markdown
home_page: https://github.com/simonw/google-drive-to-sqlite
keywords:
license: Apache License, Version 2.0
maintainer:
maintainer_email:
package_url: https://pypi.org/project/google-drive-to-sqlite/
platform:
project_url: https://pypi.org/project/google-drive-to-sqlite/
project_urls: {"CI": "https://github.com/simonw/google-drive-to-sqlite/actions", "Changelog": "https://github.com/simonw/google-drive-to-sqlite/releases", "Homepage": "https://github.com/simonw/google-drive-to-sqlite", "Issues": "https://github.com/simonw/google-drive-to-sqlite/issues"}
release_url: https://pypi.org/project/google-drive-to-sqlite/0.4/
requires_dist: ["click", "httpx", "sqlite-utils", "pytest ; extra == 'test'", "pytest-httpx ; extra == 'test'", "pytest-mock ; extra == 'test'", "cogapp ; extra == 'test'"]
requires_python: >=3.6
version: 0.4
yanked: 0
yanked_reason:

name: markdown-to-sqlite
summary: CLI tool for loading markdown files into a SQLite database
classifiers: ["Intended Audience :: Developers", "Intended Audience :: End Users/Desktop", "Intended Audience :: Science/Research", "License :: OSI Approved :: Apache Software License", "Programming Language :: Python :: 3.6", "Programming Language :: Python :: 3.7", "Topic :: Database"]
description: # markdown-to-sqlite [![PyPI](https://img.shields.io/pypi/v/markdown-to-sqlite.svg)](https://pypi.python.org/pypi/markdown-to-sqlite) [![Changelog](https://img.shields.io/github/v/release/simonw/markdown-to-sqlite?include_prereleases&label=changelog)](https://github.com/simonw/markdown-to-sqlite/releases) [![Tests](https://github.com/simonw/markdown-to-sqlite/workflows/Test/badge.svg)](https://github.com/simonw/markdown-to-sqlite/actions?query=workflow%3ATest) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/markdown-to-sqlite/blob/main/LICENSE) CLI tool for loading markdown files into a SQLite database. YAML embedded in the markdown files will be used to populate additional columns. Usage: markdown-to-sqlite [OPTIONS] DBNAME TABLE PATHS... For example: $ markdown-to-sqlite docs.db documents file1.md file2.md ## Breaking change Prior to version 1.0 this argument order was different - markdown files were listed before the database and table.
author: Simon Willison
author_email:
description_content_type: text/markdown
home_page: https://github.com/simonw/markdown-to-sqlite
keywords:
license: Apache License, Version 2.0
maintainer:
maintainer_email:
package_url: https://pypi.org/project/markdown-to-sqlite/
platform:
project_url: https://pypi.org/project/markdown-to-sqlite/
project_urls: {"CI": "https://github.com/simonw/markdown-to-sqlite/actions", "Changelog": "https://github.com/simonw/markdown-to-sqlite/releases", "Homepage": "https://github.com/simonw/markdown-to-sqlite", "Issues": "https://github.com/simonw/markdown-to-sqlite/issues"}
release_url: https://pypi.org/project/markdown-to-sqlite/1.0/
requires_dist: ["yamldown", "markdown", "sqlite-utils", "click", "pytest ; extra == 'test'"]
requires_python: >=3.6
version: 1.0
yanked: 0
yanked_reason:

name: s3-credentials
summary: A tool for creating credentials for accessing S3 buckets
classifiers: []
description: # s3-credentials [![PyPI](https://img.shields.io/pypi/v/s3-credentials.svg)](https://pypi.org/project/s3-credentials/) [![Changelog](https://img.shields.io/github/v/release/simonw/s3-credentials?include_prereleases&label=changelog)](https://github.com/simonw/s3-credentials/releases) [![Tests](https://github.com/simonw/s3-credentials/workflows/Test/badge.svg)](https://github.com/simonw/s3-credentials/actions?query=workflow%3ATest) [![Documentation Status](https://readthedocs.org/projects/s3-credentials/badge/?version=latest)](https://s3-credentials.readthedocs.org/) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/s3-credentials/blob/master/LICENSE) A tool for creating credentials for accessing S3 buckets For project background, see [s3-credentials: a tool for creating credentials for S3 buckets](https://simonwillison.net/2021/Nov/3/s3-credentials/) on my blog. ## Installation pip install s3-credentials ## Basic usage To create a new S3 bucket and output credentials that can be used with only that bucket: ``` % s3-credentials create my-new-s3-bucket --create-bucket Created bucket: my-new-s3-bucket Created user: s3.read-write.my-new-s3-bucket with permissions boundary: arn:aws:iam::aws:policy/AmazonS3FullAccess Attached policy s3.read-write.my-new-s3-bucket to user s3.read-write.my-new-s3-bucket Created access key for user: s3.read-write.my-new-s3-bucket { "UserName": "s3.read-write.my-new-s3-bucket", "AccessKeyId": "AKIAWXFXAIOZOYLZAEW5", "Status": "Active", "SecretAccessKey": "...", "CreateDate": "2021-11-03 01:38:24+00:00" } ``` The tool can do a lot more than this. See the [documentation](https://s3-credentials.readthedocs.io/) for details. ## Documentation - [Full documentation](https://s3-credentials.readthedocs.io/) - [Command help reference](https://s3-credentials.readthedocs.io/en/stable/help.html) - [Release notes](https://github.com/simonw/s3-credentials/releases)
author: Simon Willison
author_email:
description_content_type: text/markdown
home_page: https://github.com/simonw/s3-credentials
keywords:
license: Apache License, Version 2.0
maintainer:
maintainer_email:
package_url: https://pypi.org/project/s3-credentials/
platform:
project_url: https://pypi.org/project/s3-credentials/
project_urls: {"CI": "https://github.com/simonw/s3-credentials/actions", "Changelog": "https://github.com/simonw/s3-credentials/releases", "Homepage": "https://github.com/simonw/s3-credentials", "Issues": "https://github.com/simonw/s3-credentials/issues"}
release_url: https://pypi.org/project/s3-credentials/0.14/
requires_dist: ["click", "boto3", "pytest ; extra == 'test'", "pytest-mock ; extra == 'test'", "cogapp ; extra == 'test'", "moto[s3] ; extra == 'test'"]
requires_python: >=3.6
version: 0.14
yanked: 0
yanked_reason:

name: sqlite-utils
summary: CLI tool and Python utility functions for manipulating SQLite databases
classifiers: ["Development Status :: 5 - Production/Stable", "Intended Audience :: Developers", "Intended Audience :: End Users/Desktop", "Intended Audience :: Science/Research", "License :: OSI Approved :: Apache Software License", "Programming Language :: Python :: 3.10", "Programming Language :: Python :: 3.6", "Programming Language :: Python :: 3.7", "Programming Language :: Python :: 3.8", "Programming Language :: Python :: 3.9", "Topic :: Database"]
description: # sqlite-utils [![PyPI](https://img.shields.io/pypi/v/sqlite-utils.svg)](https://pypi.org/project/sqlite-utils/) [![Changelog](https://img.shields.io/github/v/release/simonw/sqlite-utils?include_prereleases&label=changelog)](https://sqlite-utils.datasette.io/en/stable/changelog.html) [![Python 3.x](https://img.shields.io/pypi/pyversions/sqlite-utils.svg?logo=python&logoColor=white)](https://pypi.org/project/sqlite-utils/) [![Tests](https://github.com/simonw/sqlite-utils/workflows/Test/badge.svg)](https://github.com/simonw/sqlite-utils/actions?query=workflow%3ATest) [![Documentation Status](https://readthedocs.org/projects/sqlite-utils/badge/?version=stable)](http://sqlite-utils.datasette.io/en/stable/?badge=stable) [![codecov](https://codecov.io/gh/simonw/sqlite-utils/branch/main/graph/badge.svg)](https://codecov.io/gh/simonw/sqlite-utils) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/sqlite-utils/blob/main/LICENSE) [![discord](https://img.shields.io/discord/823971286308356157?label=discord)](https://discord.gg/Ass7bCAMDw) Python CLI utility and library for manipulating SQLite databases. ## Some feature highlights - [Pipe JSON](https://sqlite-utils.datasette.io/en/stable/cli.html#inserting-json-data) (or [CSV or TSV](https://sqlite-utils.datasette.io/en/stable/cli.html#inserting-csv-or-tsv-data)) directly into a new SQLite database file, automatically creating a table with the appropriate schema - [Run in-memory SQL queries](https://sqlite-utils.datasette.io/en/stable/cli.html#querying-data-directly-using-an-in-memory-database), including joins, directly against data in CSV, TSV or JSON files and view the results - [Configure SQLite full-text search](https://sqlite-utils.datasette.io/en/stable/cli.html#configuring-full-text-search) against your database tables and run search queries against them, ordered by relevance - Run [transformations against your tables](https://sqlite-utils.datasette.io/en/stable/cli.html#transforming-tables) to make schema changes t…
author: Simon Willison
author_email:
description_content_type: text/markdown
home_page: https://github.com/simonw/sqlite-utils
keywords:
license: Apache License, Version 2.0
maintainer:
maintainer_email:
package_url: https://pypi.org/project/sqlite-utils/
platform:
project_url: https://pypi.org/project/sqlite-utils/
project_urls: {"CI": "https://github.com/simonw/sqlite-utils/actions", "Changelog": "https://sqlite-utils.datasette.io/en/stable/changelog.html", "Documentation": "https://sqlite-utils.datasette.io/en/stable/", "Homepage": "https://github.com/simonw/sqlite-utils", "Issues": "https://github.com/simonw/sqlite-utils/issues", "Source code": "https://github.com/simonw/sqlite-utils"}
release_url: https://pypi.org/project/sqlite-utils/3.30/
requires_dist: ["sqlite-fts4", "click", "click-default-group-wheel", "tabulate", "python-dateutil", "furo ; extra == 'docs'", "sphinx-autobuild ; extra == 'docs'", "codespell ; extra == 'docs'", "sphinx-copybutton ; extra == 'docs'", "beanbag-docutils (>=2.0) ; extra == 'docs'", "flake8 ; extra == 'flake8'", "mypy ; extra == 'mypy'", "types-click ; extra == 'mypy'", "types-tabulate ; extra == 'mypy'", "types-python-dateutil ; extra == 'mypy'", "data-science-types ; extra == 'mypy'", "pytest ; extra == 'test'", "black ; extra == 'test'", "hypothesis ; extra == 'test'", "cogapp ; extra == 'test'"]
requires_python: >=3.6
version: 3.30
yanked: 0
yanked_reason:

name: tableau-to-sqlite
summary: Fetch data from Tableau into a SQLite database
classifiers: []
description: # tableau-to-sqlite [![PyPI](https://img.shields.io/pypi/v/tableau-to-sqlite.svg)](https://pypi.org/project/tableau-to-sqlite/) [![Changelog](https://img.shields.io/github/v/release/simonw/tableau-to-sqlite?include_prereleases&label=changelog)](https://github.com/simonw/tableau-to-sqlite/releases) [![Tests](https://github.com/simonw/tableau-to-sqlite/workflows/Test/badge.svg)](https://github.com/simonw/tableau-to-sqlite/actions?query=workflow%3ATest) [![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](https://github.com/simonw/tableau-to-sqlite/blob/master/LICENSE) Fetch data from Tableau into a SQLite database. A wrapper around [TableauScraper](https://github.com/bertrandmartel/tableau-scraping/). ## Installation Install this tool using `pip`: $ pip install tableau-to-sqlite ## Usage If you have the URL to a Tableau dashboard like this: https://results.mo.gov/t/COVID19/views/VaccinationsDashboard/Vaccinations You can pass that directly to the tool: tableau-to-sqlite tableau.db \ https://results.mo.gov/t/COVID19/views/VaccinationsDashboard/Vaccinations This will create a SQLite database called `tableau.db` containing one table for each of the worksheets in that dashboard. If the dashboard is hosted on https://public.tableau.com/ you can instead provide the view name. This will be two strings separated by a `/` symbol - something like this: OregonCOVID-19VaccineProviderEnrollment/COVID-19VaccineProviderEnrollment Now run the tool like this: tableau-to-sqlite tableau.db \ OregonCOVID-19VaccineProviderEnrollment/COVID-19VaccineProviderEnrollment ## Get the data as JSON or CSV If you're building a [git scraper](https://simonwillison.net/2020/Oct/9/git-scraping/) you may want to convert the data gathered by this tool to CSV or JSON to check into your repository. You can do that using [sqlite-utils](https://sqlite-utils.datasette.io/). Install it using `pip`: pip install sqlite-utils You can dump out a table as JSON like so: sqlite-utils ro…
author: Simon Willison
author_email:
description_content_type: text/markdown
home_page: https://github.com/simonw/tableau-to-sqlite
keywords:
license: Apache License, Version 2.0
maintainer:
maintainer_email:
package_url: https://pypi.org/project/tableau-to-sqlite/
platform:
project_url: https://pypi.org/project/tableau-to-sqlite/
project_urls: {"CI": "https://github.com/simonw/tableau-to-sqlite/actions", "Changelog": "https://github.com/simonw/tableau-to-sqlite/releases", "Homepage": "https://github.com/simonw/tableau-to-sqlite", "Issues": "https://github.com/simonw/tableau-to-sqlite/issues"}
release_url: https://pypi.org/project/tableau-to-sqlite/0.2.1/
requires_dist: ["click", "TableauScraper (==0.1.2)", "pytest ; extra == 'test'", "vcrpy ; extra == 'test'"]
requires_python: >=3.6
version: 0.2.1
yanked: 0
yanked_reason:

CREATE TABLE [pypi_packages] (
   [name] TEXT PRIMARY KEY,
   [summary] TEXT,
   [classifiers] TEXT,
   [description] TEXT,
   [author] TEXT,
   [author_email] TEXT,
   [description_content_type] TEXT,
   [home_page] TEXT,
   [keywords] TEXT,
   [license] TEXT,
   [maintainer] TEXT,
   [maintainer_email] TEXT,
   [package_url] TEXT,
   [platform] TEXT,
   [project_url] TEXT,
   [project_urls] TEXT,
   [release_url] TEXT,
   [requires_dist] TEXT,
   [requires_python] TEXT,
   [version] TEXT,
   [yanked] INTEGER,
   [yanked_reason] TEXT
);
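
The requires_dist and classifiers columns hold JSON arrays serialized as TEXT. Assuming SQLite's JSON1 functions are available, a dependency filter like the one on this page can also be expressed against the individual array elements rather than with a substring match, for example:

```sql
-- Sketch: match the dependency name as a JSON array element (assumes JSON1)
SELECT name, version, requires_python
FROM pypi_packages, json_each(pypi_packages.requires_dist)
WHERE json_each.value = 'click'
  AND requires_python = '>=3.6';
```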