For high-volume use cases (more than 1 M lookups/day) or warehouse loads, pull the full dataset on a schedule instead of hammering the API.
## Format choice
| Format | Best for | Tier required |
|---|---|---|
| CSV | Spreadsheets, ad-hoc analysis, simple ETL | Starter |
| MMDB | Sub-ms local lookups in production code (Python, Go, Node, Java) | Pro |
| Parquet | Snowflake / BigQuery / Athena / Spark loads | Business |
## Scheduling
Refresh at most once an hour: the dataset rolls over with new observations roughly every 60 minutes, so pulling more often only re-downloads the same data. Write each download to a temporary file and rename it into place (tmp → final) so readers never see a partial file.
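The tmp → final pattern can be sketched as follows. This is a minimal example, not an official client; the export URL in the comment is a placeholder for whatever download endpoint your plan exposes.

```python
import os
import tempfile

def atomic_write(stream, dest: str, chunk_size: int = 1 << 20) -> None:
    """Stream to a temp file beside dest, then rename atomically (tmp -> final)."""
    dest_dir = os.path.dirname(os.path.abspath(dest)) or "."
    fd, tmp = tempfile.mkstemp(dir=dest_dir, suffix=".tmp")
    try:
        with os.fdopen(fd, "wb") as out:
            while chunk := stream.read(chunk_size):
                out.write(chunk)
            out.flush()
            os.fsync(out.fileno())  # make sure bytes hit disk before the rename
        os.replace(tmp, dest)  # atomic: readers see the old file or the new one
    except BaseException:
        os.unlink(tmp)
        raise

# Typical use (URL is hypothetical -- substitute your real export endpoint):
#   import urllib.request
#   with urllib.request.urlopen("https://api.example.com/exports/full.mmdb") as resp:
#       atomic_write(resp, "vpn-full.mmdb")
```

Writing the temp file in the same directory as the destination matters: `os.replace` is only atomic within a single filesystem.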
## MMDB lookup (Python)
Open the database once at startup and reuse the reader across lookups; fall back to an empty dict for unknown IPs.
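A minimal sketch of that fallback, assuming the `maxminddb` package and a downloaded database file (the file name below is an assumption):

```python
def lookup(reader, ip: str) -> dict:
    """Return the database record for ip, or {} for unknown IPs."""
    record = reader.get(ip)  # maxminddb's Reader.get returns None on a miss
    return record if record is not None else {}

# Typical use (point open_database at wherever you saved the MMDB export):
#   import maxminddb  # pip install maxminddb
#   with maxminddb.open_database("vpn-full.mmdb") as reader:
#       print(lookup(reader, "203.0.113.7"))
```

Opening the reader once and keeping it for the process lifetime is what gets you the sub-millisecond lookups; reopening the file per request throws that away.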
## Parquet to Snowflake
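One possible load path, sketched in Snowflake SQL. The stage, table, and file-format names and the local file path are all placeholders, and `INFER_SCHEMA` is used here to avoid hand-writing column definitions; adapt to your warehouse conventions.

```sql
-- One-time setup (names are placeholders)
CREATE FILE FORMAT IF NOT EXISTS vpn_parquet TYPE = PARQUET;
CREATE STAGE IF NOT EXISTS vpn_stage FILE_FORMAT = vpn_parquet;

-- Upload the downloaded file from SnowSQL
PUT file:///data/vpn-full.parquet @vpn_stage;

-- Create the table from the Parquet schema
CREATE TABLE IF NOT EXISTS vpn_ips USING TEMPLATE (
  SELECT ARRAY_AGG(OBJECT_CONSTRUCT(*))
  FROM TABLE(INFER_SCHEMA(LOCATION => '@vpn_stage', FILE_FORMAT => 'vpn_parquet'))
);

-- Load, matching Parquet columns to table columns by name
COPY INTO vpn_ips FROM @vpn_stage
  FILE_FORMAT = (FORMAT_NAME = 'vpn_parquet')
  MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE;
```

For BigQuery or Athena the equivalent is a `LOAD DATA` / external-table pointed at the Parquet file; the schema comes along for free either way.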
## Concurrency
Bulk downloads count against a separate bulk-concurrency limit (1 for Starter, 2 for Pro, 5 for Business, negotiated for Enterprise). Don't parallelize a single export: start one, wait for completion, then start the next.
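The start-one-wait-then-next loop can be sketched generically. `start_export` and `get_status` stand in for whatever client calls your integration uses; the bulk API itself is not specified here.

```python
import time

def run_exports_sequentially(start_export, get_status, formats, poll_secs=30):
    """Run one export at a time: start it, poll until done, then start the next.

    start_export(fmt) -> job id; get_status(job_id) -> "running" or a final state.
    Both are placeholders for your actual API client.
    """
    results = []
    for fmt in formats:
        job_id = start_export(fmt)
        while (status := get_status(job_id)) == "running":
            time.sleep(poll_secs)  # poll gently; exports are long-running
        results.append((fmt, status))
    return results
```

Running exports sequentially keeps you under the bulk-concurrency limit on every tier, so the same script works unchanged if your plan changes.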
## File size
| Format | Approx size (full dataset) |
|---|---|
| CSV | ~3 GB compressed (gzip), ~15 GB uncompressed |
| MMDB | ~600 MB |
| Parquet | ~1.2 GB |
Send `Accept-Encoding: gzip` with download requests so transfers are compressed on the wire.
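Note that Python's `urllib` does not decompress gzip responses for you, so check `Content-Encoding` yourself. A minimal sketch (the endpoint in the comment is hypothetical):

```python
import gzip

def decode_body(body: bytes, content_encoding) -> bytes:
    """Undo gzip transfer compression; pass other bodies through unchanged."""
    if content_encoding == "gzip":
        return gzip.decompress(body)
    return body

# Typical use with urllib:
#   import urllib.request
#   req = urllib.request.Request(
#       "https://api.example.com/exports/full.csv",  # placeholder endpoint
#       headers={"Accept-Encoding": "gzip"},
#   )
#   with urllib.request.urlopen(req) as resp:
#       data = decode_body(resp.read(), resp.headers.get("Content-Encoding"))
```

Higher-level HTTP clients (e.g. `requests`) send the header and decompress automatically, in which case none of this is needed.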