diff --git a/README.md b/README.md index a6b67cba6..96794bd9c 100644 --- a/README.md +++ b/README.md @@ -27,58 +27,58 @@ | Command | Description | | --- | --- | -| [apply](/src/cmd/apply.rs#L2)
✨🚀🧠🤖🔣 | Apply series of string, date, math & currency transformations to given CSV column/s. It also has some basic [NLP](https://en.wikipedia.org/wiki/Natural_language_processing) functions ([similarity](https://crates.io/crates/strsim), [sentiment analysis](https://crates.io/crates/vader_sentiment), [profanity](https://docs.rs/censor/latest/censor/), [eudex](https://github.com/ticki/eudex#eudex-a-blazingly-fast-phonetic-reductionhashing-algorithm), [language](https://crates.io/crates/whatlang) & [name gender](https://github.com/Raduc4/gender_guesser?tab=readme-ov-file#gender-guesser)) detection. | -|
[applydp](/src/cmd/applydp.rs#L2)
🚀🔣 ![CKAN](docs/images/ckan.png)| applydp is a slimmed-down version of `apply` with only [Datapusher+](https://github.com/dathere/datapusher-plus) relevant subcommands/operations (`qsvdp` binary variant only). | +| [apply](/src/cmd/apply.rs#L2)
✨🚀🧠🤖🔣👆 | Apply a series of string, date, math & currency transformations to the given CSV column/s. It also has some basic [NLP](https://en.wikipedia.org/wiki/Natural_language_processing) functions ([similarity](https://crates.io/crates/strsim), [sentiment analysis](https://crates.io/crates/vader_sentiment), [profanity](https://docs.rs/censor/latest/censor/), [eudex](https://github.com/ticki/eudex#eudex-a-blazingly-fast-phonetic-reductionhashing-algorithm), [language](https://crates.io/crates/whatlang) & [name gender](https://github.com/Raduc4/gender_guesser?tab=readme-ov-file#gender-guesser)) detection. | +| [applydp](/src/cmd/applydp.rs#L2)
🚀🔣👆 ![CKAN](docs/images/ckan.png)| applydp is a slimmed-down version of `apply` with only [Datapusher+](https://github.com/dathere/datapusher-plus) relevant subcommands/operations (`qsvdp` binary variant only). | | [behead](/src/cmd/behead.rs#L2) | Drop headers from a CSV. | | [cat](/src/cmd/cat.rs#L2)
🗄️ | Concatenate CSV files by row or by column. | | [count](/src/cmd/count.rs#L2)
📇🏎️🐻‍❄️ | Count the rows in a CSV file. (11.87 seconds for a 15gb, 27m row NYC 311 dataset without an index. Instantaneous with an index.) If the `polars` feature is enabled, uses Polars' multithreaded, mem-mapped CSV reader for fast counts even without an index | -| [datefmt](/src/cmd/datefmt.rs#L2)
🚀 | Formats recognized date fields ([19 formats recognized](https://docs.rs/qsv-dateparser/latest/qsv_dateparser/#accepted-date-formats)) to a specified date format using [strftime date format specifiers](https://docs.rs/chrono/latest/chrono/format/strftime/). | -| [dedup](/src/cmd/dedup.rs#L2)
🤯🚀 | Remove duplicate rows (See also `extdedup`, `extsort`, `sort` & `sortcheck` commands). | +| [datefmt](/src/cmd/datefmt.rs#L2)
🚀👆 | Formats recognized date fields ([19 formats recognized](https://docs.rs/qsv-dateparser/latest/qsv_dateparser/#accepted-date-formats)) to a specified date format using [strftime date format specifiers](https://docs.rs/chrono/latest/chrono/format/strftime/). | +| [dedup](/src/cmd/dedup.rs#L2)
🤯🚀👆 | Remove duplicate rows (See also `extdedup`, `extsort`, `sort` & `sortcheck` commands). | | [describegpt](/src/cmd/describegpt.rs#L2)
🌐🤖 | Infer extended metadata about a CSV using a GPT model from [OpenAI's API](https://platform.openai.com/docs/introduction), [Ollama](https://ollama.com), or another API compatible with the OpenAI API specification such as [Jan](https://jan.ai). | | [diff](/src/cmd/diff.rs#L2)
🚀 | Find the difference between two CSVs with ludicrous speed!
e.g. _compare two CSVs with 1M rows x 9 columns in under 600ms!_ | -| [enum](/src/cmd/enumerate.rs#L2) | Add a new column enumerating rows by adding a column of incremental or uuid identifiers. Can also be used to copy a column or fill a new column with a constant value. | +| [enum](/src/cmd/enumerate.rs#L2)
👆 | Add a new column enumerating rows by adding a column of incremental or uuid identifiers. Can also be used to copy a column or fill a new column with a constant value. | | [excel](/src/cmd/excel.rs#L2)
🚀 | Exports a specified Excel/ODS sheet to a CSV file. | -| [exclude](/src/cmd/exclude.rs#L2)
📇 | Removes a set of CSV data from another set based on the specified columns. | -| [explode](/src/cmd/explode.rs#L2)
🔣 | Explode rows into multiple ones by splitting a column value based on the given separator. | +| [exclude](/src/cmd/exclude.rs#L2)
📇👆 | Removes a set of CSV data from another set based on the specified columns. | +| [explode](/src/cmd/explode.rs#L2)
🔣👆 | Explode rows into multiple ones by splitting a column value based on the given separator. | | [extdedup](/src/cmd/extdedup.rs#L2)
| Remove duplicate rows from an arbitrarily large CSV/text file using a memory-mapped, [on-disk hash table](https://crates.io/crates/odht). Unlike the `dedup` command, this command does not load the entire file into memory nor does it sort the deduped file. | | [extsort](/src/cmd/extsort.rs#L2)
🚀 | Sort an arbitrarily large CSV/text file using a multithreaded [external merge sort](https://en.wikipedia.org/wiki/External_sorting) algorithm. | | [fetch](/src/cmd/fetch.rs#L3)
✨🧠🌐 | Fetches data from web services for every row using **HTTP Get**. Comes with [HTTP/2](https://http2-explained.haxx.se/en/part1) [adaptive flow control](https://medium.com/coderscorner/http-2-flow-control-77e54f7fd518), [jql](https://github.com/yamafaktory/jql#%EF%B8%8F-usage) JSON query language support, dynamic throttling ([RateLimit](https://www.ietf.org/archive/id/draft-ietf-httpapi-ratelimit-headers-06.html)) & caching with available persistent caching using [Redis](https://redis.io/) or a disk-cache. | | [fetchpost](/src/cmd/fetchpost.rs#L3)
✨🧠🌐 | Similar to `fetch`, but uses **HTTP Post**. ([HTTP GET vs POST methods](https://www.geeksforgeeks.org/difference-between-http-get-and-post-methods/)) | -| [fill](/src/cmd/fill.rs#L2) | Fill empty values. | +| [fill](/src/cmd/fill.rs#L2)
👆 | Fill empty values. | | [fixlengths](/src/cmd/fixlengths.rs#L2) | Force a CSV to have same-length records by either padding or truncating them. | | [flatten](/src/cmd/flatten.rs#L2) | A flattened view of CSV records. Useful for viewing one record at a time.
e.g. `qsv slice -i 5 data.csv \| qsv flatten`. | | [fmt](/src/cmd/fmt.rs#L2) | Reformat a CSV with different delimiters, record terminators or quoting rules. (Supports ASCII delimited data.) | -| [frequency](/src/cmd/frequency.rs#L2)
📇😣🏎️ | Build [frequency tables](https://statisticsbyjim.com/basics/frequency-table/) of each column. Uses multithreading to go faster if an index is present. | -| [geocode](/src/cmd/geocode.rs#L2)
✨🧠🌐🚀🔣 | Geocodes a location against an updatable local copy of the [Geonames](https://www.geonames.org/) cities database. With caching and multi-threading, it geocodes up to 360,000 records/sec! | +| [frequency](/src/cmd/frequency.rs#L2)
📇😣🏎️👆 | Build [frequency tables](https://statisticsbyjim.com/basics/frequency-table/) of each column. Uses multithreading to go faster if an index is present. | +| [geocode](/src/cmd/geocode.rs#L2)
✨🧠🌐🚀🔣👆 | Geocodes a location against an updatable local copy of the [Geonames](https://www.geonames.org/) cities database. With caching and multi-threading, it geocodes up to 360,000 records/sec! | | [headers](/src/cmd/headers.rs#L2)
🗄️ | Show the headers of a CSV. Or show the intersection of all headers between many CSV files. | | [index](/src/cmd/index.rs#L2) | Create an index (📇) for a CSV. This is very quick (even the 15gb, 28m row NYC 311 dataset takes all of 14 seconds to index) & provides constant time indexing/random access into the CSV. With an index, `count`, `sample` & `slice` work instantaneously; random access mode is enabled in `luau`; and multithreading (🏎️) is enabled for the `frequency`, `split`, `stats`, `schema` & `tojsonl` commands. | | [input](/src/cmd/input.rs#L2) | Read CSV data with special commenting, quoting, trimming, line-skipping & non-UTF8 encoding handling rules. Typically used to "normalize" a CSV for further processing with other qsv commands. | -| [join](/src/cmd/join.rs#L2) | Inner, outer, right, cross, anti & semi joins. Automatically creates a simple, in-memory hash index to make it fast. | +| [join](/src/cmd/join.rs#L2)
👆 | Inner, outer, right, cross, anti & semi joins. Automatically creates a simple, in-memory hash index to make it fast. | | [joinp](/src/cmd/joinp.rs#L2)
✨🚀🐻‍❄️ | Inner, outer, cross, anti, semi & asof joins using the [Pola.rs](https://www.pola.rs) engine. Unlike the `join` command, `joinp` can process files larger than RAM, is multithreaded, has join key validation, pre-join filtering, supports [asof joins](https://pola-rs.github.io/polars/py-polars/html/reference/dataframe/api/polars.DataFrame.join_asof.html) (which is [particularly useful for time series data](https://github.com/jqnatividad/qsv/blob/30cc920d0812a854fcbfedc5db81788a0600c92b/tests/test_joinp.rs#L509-L983)) & its output doesn't have duplicate columns. However, `joinp` doesn't have an --ignore-case option & it doesn't support right outer joins. | | [jsonl](/src/cmd/jsonl.rs#L2)
🚀🔣 | Convert newline-delimited JSON ([JSONL](https://jsonlines.org/)/[NDJSON](http://ndjson.org/)) to CSV. See `tojsonl` command to convert CSV to JSONL. | [jsonp](/src/cmd/jsonp.rs#L2)
✨🐻‍❄️ | Convert non-nested JSON to CSV. Only available with the polars feature enabled. |
[luau](/src/cmd/luau.rs#L2) 👑
✨📇🌐🔣 ![CKAN](docs/images/ckan.png) | Create multiple new computed columns, filter rows, compute aggregations and build complex data pipelines by executing a [Luau](https://luau-lang.org) [0.630](https://github.com/Roblox/luau/releases/tag/0.630) expression/script for every row of a CSV file ([sequential mode](https://github.com/jqnatividad/qsv/blob/bb72c4ef369d192d85d8b7cc6e972c1b7df77635/tests/test_luau.rs#L254-L298)), or using [random access](https://www.webopedia.com/definitions/random-access/) with an index ([random access mode](https://github.com/jqnatividad/qsv/blob/bb72c4ef369d192d85d8b7cc6e972c1b7df77635/tests/test_luau.rs#L367-L415)).
Can process a single Luau expression or [full-fledged data-wrangling scripts using lookup tables](https://github.com/dathere/qsv-lookup-tables#example) with discrete BEGIN, MAIN and END sections.
It is not just another qsv command, it is qsv's [Domain-specific Language](https://en.wikipedia.org/wiki/Domain-specific_language) (DSL) with [numerous qsv-specific helper functions](https://github.com/jqnatividad/qsv/blob/113eee17b97882dc368b2e65fec52b86df09f78b/src/cmd/luau.rs#L1356-L2290) to build production data pipelines. | -| [partition](/src/cmd/partition.rs#L2) | Partition a CSV based on a column value. | +| [partition](/src/cmd/partition.rs#L2)
👆 | Partition a CSV based on a column value. | | [prompt](/src/cmd/prompt.rs#L2) | Open a file dialog to pick a file. | -| [pseudo](/src/cmd/pseudo.rs#L2)
🔣 | [Pseudonymise](https://en.wikipedia.org/wiki/Pseudonymization) the value of the given column by replacing them with an incremental identifier. | +| [pseudo](/src/cmd/pseudo.rs#L2)
🔣👆 | [Pseudonymise](https://en.wikipedia.org/wiki/Pseudonymization) the values of the given column by replacing them with an incremental identifier. | | [py](/src/cmd/python.rs#L2)
✨🔣 | Create a new computed column or filter rows by evaluating a python expression on every row of a CSV file. Python's [f-strings](https://www.freecodecamp.org/news/python-f-strings-tutorial-how-to-use-f-strings-for-string-formatting/) is particularly useful for extended formatting, [with the ability to evaluate Python expressions as well](https://github.com/jqnatividad/qsv/blob/4cd00dca88addf0d287247fa27d40563b6d46985/src/cmd/python.rs#L23-L31). | | [rename](/src/cmd/rename.rs#L2) | Rename the columns of a CSV efficiently. | -| [replace](/src/cmd/replace.rs#L2) | Replace CSV data using a regex. Applies the regex to each field individually. | +| [replace](/src/cmd/replace.rs#L2)
👆 | Replace CSV data using a regex. Applies the regex to each field individually. | | [reverse](/src/cmd/reverse.rs#L2)
📇🤯 | Reverse order of rows in a CSV. Unlike the `sort --reverse` command, it preserves the order of rows with the same key. If an index is present, it works with constant memory. Otherwise, it will load all the data into memory. | | [safenames](/src/cmd/safenames.rs#L2)
![CKAN](docs/images/ckan.png) | Modify headers of a CSV to only have ["safe" names](/src/cmd/safenames.rs#L5-L14) - guaranteed "database-ready"/"CKAN-ready" names. | | [sample](/src/cmd/sample.rs#L2)
📇🌐🏎️ | Randomly draw rows (with optional seed) from a CSV using [reservoir sampling](https://en.wikipedia.org/wiki/Reservoir_sampling), using memory proportional to the sample size. If an index is present, it uses random indexing with constant memory. | -| [schema](/src/cmd/schema.rs#L2)
📇😣🏎️ | Infer schema from CSV data, replete with data type & domain/range validation & output in [JSON Schema](https://json-schema.org/) format. Uses multithreading to go faster if an index is present. See `validate` command to use the generated JSON Schema to validate if similar CSVs comply with the schema. | -| [search](/src/cmd/search.rs#L2) | Run a regex over a CSV. Applies the regex to each field individually & shows only matching rows. | -| [searchset](/src/cmd/searchset.rs#L2) | _Run multiple regexes over a CSV in a single pass._ Applies the regexes to each field individually & shows only matching rows. | -| [select](/src/cmd/select.rs#L2) | Select, re-order, duplicate or drop columns. | +| [schema](/src/cmd/schema.rs#L2)
📇😣🏎️👆 | Infer schema from CSV data, replete with data type & domain/range validation & output in [JSON Schema](https://json-schema.org/) format. Uses multithreading to go faster if an index is present. See `validate` command to use the generated JSON Schema to validate if similar CSVs comply with the schema. | +| [search](/src/cmd/search.rs#L2)
👆 | Run a regex over a CSV. Applies the regex to the selected fields & shows only matching rows. | +| [searchset](/src/cmd/searchset.rs#L2)
👆 | _Run multiple regexes over a CSV in a single pass._ Applies the regexes to each field individually & shows only matching rows. | +| [select](/src/cmd/select.rs#L2)
👆 | Select, re-order, reverse, duplicate or drop columns. | | [slice](/src/cmd/slice.rs#L2)
📇🏎️ | Slice rows from any part of a CSV. When an index is present, this only has to parse the rows in the slice (instead of all rows leading up to the start of the slice). | | [snappy](/src/cmd/snappy.rs#L2)
🚀🌐 | Does streaming compression/decompression of the input using Google's [Snappy](https://github.com/google/snappy/blob/main/docs/README.md) framing format ([more info](#snappy-compressiondecompression)). | | [sniff](/src/cmd/sniff.rs#L2)
🌐 ![CKAN](docs/images/ckan.png) | Quickly sniff & infer CSV metadata (delimiter, header row, preamble rows, quote character, flexible, is_utf8, average record length, number of records, content length & estimated number of records if sniffing a CSV on a URL, number of fields, field names & data types). It is also a general mime type detector. | -| [sort](/src/cmd/sort.rs#L2)
🚀🤯 | Sorts CSV data in alphabetical (with case-insensitive option), numerical, reverse, unique or random (with optional seed) order (See also `extsort` & `sortcheck` commands). | -| [sortcheck](/src/cmd/sortcheck.rs#L2)
📇 | Check if a CSV is sorted. With the --json options, also retrieve record count, sort breaks & duplicate count. | +| [sort](/src/cmd/sort.rs#L2)
🚀🤯👆 | Sorts CSV data in alphabetical (with case-insensitive option), numerical, reverse, unique or random (with optional seed) order (See also `extsort` & `sortcheck` commands). | +| [sortcheck](/src/cmd/sortcheck.rs#L2)
📇👆 | Check if a CSV is sorted. With the --json option, also retrieve record count, sort breaks & duplicate count. | | [split](/src/cmd/split.rs#L2)
📇🏎️ | Split one CSV file into many CSV files. It can split by number of rows, number of chunks or file size. Uses multithreading to go faster if an index is present when splitting by rows or chunks. | | [sqlp](/src/cmd/sqlp.rs#L2)
✨🚀🐻‍❄️🗄️ | Run [Polars](https://pola.rs) SQL queries against several CSVs - converting queries to blazing-fast [LazyFrame](https://docs.pola.rs/user-guide/lazy/using/) expressions, processing larger than memory CSV files. | -| [stats](/src/cmd/stats.rs#L2)
📇🤯🏎️ | Compute [summary statistics](https://en.wikipedia.org/wiki/Summary_statistics) (sum, min/max/range, min/max length, mean, SEM, stddev, variance, CV, nullcount, max precision, sparsity, quartiles, IQR, lower/upper fences, skewness, median, mode/s, antimode/s & cardinality) & make GUARANTEED data type inferences (Null, String, Float, Integer, Date, DateTime, Boolean) for each column in a CSV.
Uses multithreading to go faster if an index is present (with an index, can compile "streaming" stats on NYC's 311 data (15gb, 28m rows) in less than 7.3 seconds!). | +| [stats](/src/cmd/stats.rs#L2)
📇🤯🏎️👆 | Compute [summary statistics](https://en.wikipedia.org/wiki/Summary_statistics) (sum, min/max/range, min/max length, mean, SEM, stddev, variance, CV, nullcount, max precision, sparsity, quartiles, IQR, lower/upper fences, skewness, median, mode/s, antimode/s & cardinality) & make GUARANTEED data type inferences (Null, String, Float, Integer, Date, DateTime, Boolean) for each column in a CSV.
Uses multithreading to go faster if an index is present (with an index, can compile "streaming" stats on NYC's 311 data (15gb, 28m rows) in less than 7.3 seconds!). | | [table](/src/cmd/table.rs#L2)
🤯 | Show aligned output of a CSV using [elastic tabstops](https://github.com/BurntSushi/tabwriter). To interactively view CSV files, qsv pairs well with [csvlens](https://github.com/YS-L/csvlens#csvlens). | | [to](/src/cmd/to.rs#L2)
✨🚀🗄️ | Convert CSV files to [PostgreSQL](https://www.postgresql.org), [SQLite](https://www.sqlite.org/index.html), XLSX, [Parquet](https://parquet.apache.org) and [Data Package](https://datahub.io/docs/data-packages/tabular). | | [tojsonl](/src/cmd/tojsonl.rs#L3)
📇😣🚀🔣 | Smartly converts CSV to a newline-delimited JSON ([JSONL](https://jsonlines.org/)/[NDJSON](http://ndjson.org/)). By scanning the CSV first, it "smartly" infers the appropriate JSON data type for each column. See `jsonl` command to convert JSONL to CSV. | @@ -100,6 +100,7 @@ ![CKAN](docs/images/ckan.png) : has [CKAN](https://ckan.org)-aware integration options. 🌐: has web-aware options. 🔣: requires UTF-8 encoded input. +👆: has powerful column selector support. See `select` for syntax. ## Installation Options diff --git a/src/cmd/geocode.rs b/src/cmd/geocode.rs index 590dda97f..0b6a2c901 100644 --- a/src/cmd/geocode.rs +++ b/src/cmd/geocode.rs @@ -171,6 +171,8 @@ geocode arguments: For reverse, it must be a column using WGS 84 coordinates in "lat, long" or "(lat, long)" format. For countryinfo, it must be a column with a ISO 3166-1 alpha-2 country code. + Note that you can use column selector syntax to select the column, but only + the first column will be used. See `select --help` for more information. The location to geocode for suggestnow, reversenow & countryinfonow subcommands. For suggestnow, its a City string pattern. diff --git a/src/cmd/partition.rs b/src/cmd/partition.rs index 0f2ad34d2..b14e627bd 100644 --- a/src/cmd/partition.rs +++ b/src/cmd/partition.rs @@ -28,6 +28,10 @@ Usage: partition arguments: The column to use as a key for partitioning. + You can use the `--select` option to select + the column by name or index, but only one + column can be used for partitioning. + See `select` command for more details. The directory to write the output files to. The CSV file to read from. If not specified, then the input will be read from stdin. diff --git a/src/cmd/pseudo.rs b/src/cmd/pseudo.rs index 3ea8f1157..79b571867 100644 --- a/src/cmd/pseudo.rs +++ b/src/cmd/pseudo.rs @@ -42,6 +42,13 @@ Usage: qsv pseudo [options] [] qsv pseudo --help +pseudo arguments: + The column to pseudonymise. 
You can use the `--select` + option to select the column by name or index. + See `select` command for more details. + The CSV file to read from. If not specified, then + the input will be read from stdin. + Common options: -h, --help Display this message --start The starting number for the incremental identifier. diff --git a/src/cmd/search.rs b/src/cmd/search.rs index 198d849df..21069a48f 100644 --- a/src/cmd/search.rs +++ b/src/cmd/search.rs @@ -1,7 +1,7 @@ static USAGE: &str = r#" Filters CSV data by whether the given regex matches a row. -The regex is applied to each field in each row, and if any field matches, +The regex is applied to each selected field in each row, and if any field matches, then the row is written to the output, and the number of matches to stderr. The columns to search can be limited with the '--select' flag (but the full row diff --git a/src/cmd/select.rs b/src/cmd/select.rs index 842df421d..08bb6136a 100644 --- a/src/cmd/select.rs +++ b/src/cmd/select.rs @@ -1,8 +1,8 @@ static USAGE: &str = r#" Select columns from CSV data efficiently. -This command lets you manipulate the columns in CSV data. You can re-order -them, duplicate them or drop them. Columns can be referenced by index or by +This command lets you manipulate the columns in CSV data. You can re-order, +duplicate, reverse or drop them. Columns can be referenced by index or by name if there is a header row (duplicate column names can be disambiguated with more indexing). Column ranges can also be specified. Finally, columns can be selected using regular expressions. 
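The `select` usage text above describes the selector forms (index, name, range, regex) that the new 👆 legend points to. As an illustrative sketch of that syntax (the exact selector forms are those documented in `select --help`, so treat these as assumed examples, not authoritative):

```console
# re-order: put columns 3 and 1 first, referenced by index
$ qsv select 3,1 data.csv

# select a contiguous range by header name (assumes a header row)
$ qsv select 'city-state' data.csv

# drop columns whose header matches a regex (negated regex selector)
$ qsv select '!/^internal_/' data.csv
```

The same selector syntax is what the 👆-marked commands accept for their `--select`/column arguments.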
diff --git a/tests/test_geocode.rs b/tests/test_geocode.rs index a01b9f4a6..9a7efdeb3 100644 --- a/tests/test_geocode.rs +++ b/tests/test_geocode.rs @@ -40,6 +40,52 @@ fn geocode_suggest() { assert_eq!(got, expected); } +#[test] +fn geocode_suggest_select() { + let wrk = Workdir::new("geocode_suggest_select"); + wrk.create( + "data.csv", + vec![ + svec!["c1", "c2", "Location"], + svec!["1", "2", "Melrose, New York"], + svec!["3", "4", "East Flatbush, New York"], + svec!["5", "6", "Manhattan, New York"], + svec!["7", "8", "Brooklyn, New York"], + svec!["9", "10", "East Harlem, New York"], + svec![ + "11", + "12", + "This is not a Location and it will not be geocoded" + ], + svec!["13", "14", "Jersey City, New Jersey"], + svec!["15", "16", "95.213424, 190,1234565"], // invalid lat, long + svec!["17", "18", "Makati, Metro Manila, Philippines"], + ], + ); + let mut cmd = wrk.command("geocode"); + // use select syntax to select the last column + cmd.arg("suggest").arg("_").arg("data.csv"); + + let got: Vec> = wrk.read_stdout(&mut cmd); + let expected = vec![ + svec!["c1", "c2", "Location"], + svec!["1", "2", "(41.90059, -87.85673)"], + svec!["3", "4", "(28.11085, -82.69482)"], + svec!["5", "6", "(40.71427, -74.00597)"], + svec!["7", "8", "(45.09413, -93.35634)"], + svec!["9", "10", "(40.79472, -73.9425)"], + svec![ + "11", + "12", + "This is not a Location and it will not be geocoded" + ], + svec!["13", "14", "(40.72816, -74.07764)"], + svec!["15", "16", "95.213424, 190,1234565"], // suggest expects a city name, not lat, long + svec!["17", "18", "(14.55027, 121.03269)"], + ]; + assert_eq!(got, expected); +} + #[test] fn geocode_suggestnow_default() { let wrk = Workdir::new("geocode_suggestnow_default");
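The new `geocode_suggest_select` test above exercises the `_` selector to pick the last column. A hypothetical command-line equivalent of what the test runs (assuming a `data.csv` with the `Location` column last and the Geonames index already available locally):

```console
# "_" is the select-syntax shorthand used in the test to target the last column
$ qsv geocode suggest _ data.csv
```

As the geocode usage note in this PR states, when the selector matches multiple columns only the first one is used.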