- gTTS is now built as a wheel package (Python 2 & 3) (#181)
- Added new tokenizer case for ':' preventing cut in the middle of a time notation (#135)
- Added Python 3.7 support, modernization of packaging, testing and CI (#126)
- Fixed language retrieval/validation broken from new Google Translate page (#156)
- Fixed a UnicodeDecodeError when installing gTTS if system locale was not utf-8 (#120)
- Added Pre-processing and tokenizing > Minimizing section about the API's 100-character limit and how larger tokens are handled (#121)
- The `gtts` module:
  - New logger ("gtts") replaces all occurrences of `print()` (#108)
  - Languages list is now obtained automatically (`gtts.lang`) (#91, #94, #106)
  - Added a curated list of language sub-tags that have been observed to provide different dialects or accents (e.g. "en-gb", "fr-ca")
  - New `gTTS()` parameter `lang_check` to disable language checking
  - `gTTS()` now delegates the `text` tokenizing to the API request methods (i.e. `write_to_fp()`, `save()`), allowing `gTTS` instances to be modified/reused
  - Rewrote tokenizing and added pre-processing (see below)
  - New `gTTS()` parameters `pre_processor_funcs` and `tokenizer_func` to configure pre-processing and tokenizing (or use a 3rd party tokenizer)
  - Error handling:
    - Added new exception `gTTSError` raised on API request errors. It attempts to guess what went wrong based on known information and observed behaviour (#60, #106)
    - `gTTS.write_to_fp()` and `gTTS.save()` also raise `gTTSError` on gtts_token error
    - `gTTS.write_to_fp()` raises `TypeError` when `fp` is not a file-like object or one that doesn't take bytes
    - `gTTS()` raises `ValueError` on unsupported languages (and `lang_check` is `True`)
    - More fine-grained error handling throughout (e.g. request failed vs. request successful with a bad response)
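The "guess what went wrong" behaviour of `gTTSError` can be pictured with a small sketch. This is hypothetical code, not the library's implementation: an exception class that derives a likely cause from the HTTP status of a failed request.

```python
class APIError(Exception):
    """Illustrative stand-in for gTTSError: builds its message by
    guessing a likely cause from the failed response's status code."""

    def __init__(self, status, lang, lang_checked):
        self.status = status
        super().__init__(self._infer_msg(status, lang, lang_checked))

    @staticmethod
    def _infer_msg(status, lang, lang_checked):
        # Distinguish failure modes, as the changelog describes
        # ("request failed vs. request successful with a bad response").
        if status == 403:
            return "403 (Forbidden): probable cause: bad token or upstream API changes"
        if status == 404 and not lang_checked:
            return f"404 (Not Found): unsupported language '{lang}'"
        if status >= 500:
            return f"{status}: upstream API error, try again later"
        return f"{status}: unknown error"

print(APIError(404, "xx", lang_checked=False))
# 404 (Not Found): unsupported language 'xx'
```

The point of the design is that a user who disabled `lang_check` still gets a useful hint when the API rejects an unsupported language.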
- Tokenizer (and new pre-processors):
  - Rewrote and greatly expanded tokenizer (`gtts.tokenizer`)
  - Smarter token 'cleaning' that will remove tokens that only contain characters that can't be spoken (i.e. punctuation and whitespace)
  - Decoupled token minimizing from tokenizing, making the latter usable in other contexts
  - New flexible speech-centric text pre-processing
  - New flexible full-featured regex-based tokenizer (`gtts.tokenizer.core.Tokenizer`)
  - New `RegexBuilder`, `PreProcessorRegex` and `PreProcessorSub` classes to make writing regex-powered text pre-processors and tokenizer cases easier
  - Pre-processors:
    - Re-form words cut by end-of-line hyphens
    - Remove periods after a (customizable) list of known abbreviations (e.g. "jr", "sr", "dr") that can be spoken the same without a period
    - Perform speech corrections by doing word-for-word replacements from a (customizable) list of tuples
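The pre-processor and tokenizer pipeline described above can be sketched in a few lines. This is a simplified illustration of the idea, not the actual `gtts.tokenizer` code: a pre-processor rewrites the text (here, the abbreviation rule), a regex tokenizer splits on punctuation cases, and "cleaning" drops tokens with nothing speakable in them.

```python
import re

def drop_abbreviation_periods(text, abbrevs=("jr", "sr", "dr")):
    # Pre-processor: remove the period after known abbreviations
    # ("Dr." -> "Dr") so it is not mistaken for a sentence end.
    pattern = r"(?i)\b(" + "|".join(abbrevs) + r")\."
    return re.sub(pattern, r"\1", text)

def tokenize(text, delims=".?!,"):
    # Tokenizer: split on punctuation cases. Note ':' is not a
    # delimiter here, so time notations like "5:30" stay intact.
    tokens = re.split("[" + re.escape(delims) + "]", text)
    # Cleaning: keep only tokens containing a speakable character.
    return [t.strip() for t in tokens if re.search(r"\w", t)]

text = "Dr. Smith arrived. It was 5:30, sadly..."
print(tokenize(drop_abbreviation_periods(text)))
# ['Dr Smith arrived', 'It was 5:30', 'sadly']
```

In the real library these pieces are composable: `pre_processor_funcs` and `tokenizer_func` let callers swap in their own versions of each stage.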
- The `gtts-cli` command-line tool:
  - Rewrote cli as first-class citizen module (`gtts.cli`), powered by Click
  - Windows support using setuptool's entry_points
  - Better support for Unicode I/O in Python 2
  - All arguments are now pre-validated
  - New `--nocheck` flag to skip language pre-checking
  - New `--all` flag to list all available languages
  - Either the `--file` option or the `<text>` argument can be set to "-" to read from `stdin`
  - The `--debug` flag uses logging and doesn't pollute `stdout` anymore
- `_minimize()`: Fixed an infinite recursion loop that would occur when a token started with the minimizing delimiter (i.e. a space) (#86)
- `_minimize()`: Handle the case where a token of more than 100 characters did not contain a space (e.g. in Chinese)
- Fixed an issue that fused multiline text together if the total number of characters was less than 100
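Both `_minimize()` fixes are easiest to see in a simplified sketch. This is hypothetical code assuming a space delimiter and the 100-character limit, not the library's exact implementation:

```python
def minimize(text, delim=" ", max_size=100):
    """Recursively split text into chunks of at most max_size,
    preferring to cut at the last delim before the limit."""
    # Fix for #86: strip a leading delimiter, otherwise the cut
    # index stays at 0 and the recursion never terminates.
    if text.startswith(delim):
        text = text[len(delim):]
    if len(text) <= max_size:
        return [text]
    idx = text.rfind(delim, 0, max_size)
    if idx < 1:
        # Fix for delimiter-less text (e.g. Chinese): no space to
        # cut at within the limit, so hard-cut at max_size.
        idx = max_size
    return [text[:idx]] + minimize(text[idx:], delim, max_size)

print([len(c) for c in minimize("字" * 250)])
# [100, 100, 50]
```

Without the second fix, a 250-character space-free string would never find a cut point; without the first, each recursive call would start with the same leading space and loop forever.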
- Fixed `gtts-cli` Unicode errors in Python 2.7 (famous last words) (#78, #93, #96)
- Dropped Python 3.3 support
- Removed `debug` parameter of `gTTS` (in favour of logger)
- `gtts-cli`: Changed long option name of `-o` to `--output` instead of `--destination`
- `gTTS()` will raise a `ValueError` rather than an `AssertionError` on unsupported language
- Rewrote all documentation files as reStructuredText
- Comprehensive documentation written for Sphinx, published to http://gtts.readthedocs.io
- Changelog built with towncrier
- Major test re-work
- Language tests can read a `TEST_LANGS` environment variable so not all language tests are run every time
- Added AppVeyor CI for Windows
- PEP 8 compliance
- Update LICENCE, add to manifest (#77)
- Add Unicode punctuation to the tokenizer (such as for Chinese and Japanese) (#75)
- Option for slower read speed (`slow=True` for `gTTS()`, `--slow` for `gtts-cli`) (#40, #41, #64, #67)
- System proxy settings are passed transparently to all http requests (#45, #68)
- Silence SSL warnings from urllib3 (#69)
- The text to read is now cut into proper chunks under Python 2 unicode; previously this broke reading for many languages such as Russian
- Disabled SSL verify on http requests to accommodate certain firewalls and proxies.
- Better Python 2/3 support in general (#9, #48, #68)
- 'pt-br' : 'Portuguese (Brazil)' (it was the same as 'pt' and not Brazilian) (#69)
- Added `stdin` support via the '-' `text` argument to `gtts-cli` (#56)
- Added utf-8 support to `gtts-cli` (#52)
- 'ht' : 'Haitian Creole' (removed by Google) (#43)
- Spun-off token calculation to gTTS-Token (#23, #29)
- Moved out gTTS token to its own module (#19)
- Added `stdout` support to `gtts-cli`; text is now an argument rather than an option (#10)
- Raise an exception on bad HTTP response (4xx or 5xx) (#8)
- Added `client=t` parameter for the API HTTP request (#8)
- Added `write_to_fp()` to write to a file-like object (#6)
- Added Languages: 'zh-yue': 'Chinese (Cantonese)', 'en-uk': 'English (United Kingdom)', 'pt-br': 'Portuguese (Brazil)', 'es-es': 'Spanish (Spain)', 'es-us': 'Spanish (United States)', 'zh-cn': 'Chinese (Mandarin/China)', 'zh-tw': 'Chinese (Mandarin/Taiwan)' (#4)
- `gtts-cli`: print version and pretty-print available languages; language codes are now case-insensitive (#4)
- Added Languages: 'en-us' : 'English (United States)', 'en-au' : 'English (Australia)' (#3)
- Python 3 support
- SemVer versioning, CI changes
- Initial release