Releases: janhq/cortex.cpp
0.5.0
Changes
- fix: should not return failed code on model already loaded by @louis-jan in https://github.com/janhq/cortex/pull/913
- refactor: cortex cli as client - communicate with API server via cortexjs by @louis-jan in https://github.com/janhq/cortex/pull/906
- chore: unsupported platform engine status by @louis-jan in https://github.com/janhq/cortex/pull/920
- chore: update default api server config by @louis-jan in https://github.com/janhq/cortex/pull/921
- fix: config remote engine by @marknguyen1302 in https://github.com/janhq/cortex/pull/922
- feat: add cortex-cpp version to log by @vansangpfiev in https://github.com/janhq/cortex/pull/912
- chore: group system apis by @marknguyen1302 in https://github.com/janhq/cortex/pull/924
- fix: transform anthropic response by @marknguyen1302 in https://github.com/janhq/cortex/pull/925
- chore: enhance error output by @louis-jan in https://github.com/janhq/cortex/pull/923
- Change from GitHub-hosted to macOS self-hosted runners by @hiento09 in https://github.com/janhq/cortex/pull/926
- chore: attempt to release running services on exit by @louis-jan in https://github.com/janhq/cortex/pull/927
- Update the API description by @irfanpena in https://github.com/janhq/cortex/pull/933
- Fix winget PR issue by @hiento09 in https://github.com/janhq/cortex/pull/934
- fix: handle multi download model, uninstall script by @marknguyen1302 in https://github.com/janhq/cortex/pull/932
- Add exe to release for winget by @hiento09 in https://github.com/janhq/cortex/pull/935
- chore: inference chat preset support by @louis-jan in https://github.com/janhq/cortex/pull/931
- chore: add log downloading model by @marknguyen1302 in https://github.com/janhq/cortex/pull/936
- chore: fixed an issue where cortex chat reload the running models by @louis-jan in https://github.com/janhq/cortex/pull/937
- fix: correct progress multi download by @marknguyen1302 in https://github.com/janhq/cortex/pull/938
- chore: add model version output by @louis-jan in https://github.com/janhq/cortex/pull/939
- chore: correct output logs by @louis-jan in https://github.com/janhq/cortex/pull/940
Full Changelog: https://github.com/janhq/cortex/compare/v0.4.36...v0.5.0
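Several 0.5.0 entries touch the OpenAI-compatible API server and chat flow (the CLI-as-client refactor in #906, chat preset support in #931). As a minimal sketch of the request shape such a server accepts, here is an OpenAI-style chat-completion payload; the model id and every field value below are illustrative assumptions, not values taken from these release notes.

```python
import json

# Build a minimal OpenAI-compatible /chat/completions request body.
# "llama3" is a hypothetical local model id used only for illustration.
payload = {
    "model": "llama3",
    "messages": [
        {"role": "user", "content": "Hello"},
    ],
    "stream": False,
}

# Serialize to the JSON body a client would POST to the local API server.
body = json.dumps(payload)
```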
0.4.37-7
Changes
- Fix winget PR issue @hiento09 (#934)
- Update the API description @irfanpena (#933)
- chore: attempt to release running services on exit @louis-jan (#927)
- Change from GitHub-hosted to macOS self-hosted runners @hiento09 (#926)
- chore: enhance error output @louis-jan (#923)
- fix: transform anthropic response @marknguyen1302 (#925)
- chore: group system apis @marknguyen1302 (#924)
- feat: add cortex-cpp version to log @vansangpfiev (#912)
- fix: config remote engine @marknguyen1302 (#922)
- chore: update default api server config @louis-jan (#921)
- chore: unsupported platform engine status @louis-jan (#920)
- refactor: cortex cli as client - communicate with API server via cortexjs @louis-jan (#906)
- fix: should not return failed code on model already loaded @louis-jan (#913)
- fix: add engine in unload model request @marknguyen1302 (#916)
- Correct path of file source.changes @hiento09 (#915)
- Feat cortex publish launchpad @hiento09 (#910)
- fix: wrong condition to add engine param @marknguyen1302 (#914)
- fix: wrong remote engine payload @marknguyen1302 (#911)
- feat: add cortex version option @marknguyen1302 (#904)
- fix: electron build @louis-jan (#908)
- fix: cannot chat with remote model @marknguyen1302 (#907)
- feat: update models command @marknguyen1302 (#905)
- remove --options max-old-space-size=30720,tls-min-v1.0,expose-gc pkg @hiento09 (#902)
- refactor: deprecate cortex configs @louis-jan (#901)
- feat: Create llama3.yml presets for llama3 family @Van-QA (#862)
- fix: allow duplicated options @marknguyen1302 (#903)
- chore: auto load model on /chat/completions request @louis-jan (#900)
- Remove serve command @marknguyen1302 (#896)
- fix: transform wrong format in JSON response from Anthropic @marknguyen1302 (#897)
- fix: download model wrong event emission @louis-jan (#895)
- chore: lock cortex-cpp dependency version @louis-jan (#894)
- feat: support running multiple engines at the same time @vansangpfiev (#891)
- fix: throttle leading and trailing event @louis-jan (#892)
- fix: wrong engine list @marknguyen1302 (#890)
- feat: add engine init endpoint @louis-jan (#888)
- feat: change init engine syntax @marknguyen1302 (#889)
- fix: wrong telemetry folder, correct swagger @marknguyen1302 (#887)
- Chore test openai api @hiento09 (#886)
- fix: add custom model id @marknguyen1302 (#882)
- fix: remote engines should accept normalized headers only @louis-jan (#885)
- add better example for the request body @irfanpena (#883)
- chore: add an entry to allow JS import @louis-jan (#878)
- fix: cortex-cpp node prebuild dependencies @louis-jan (#879)
- Update the README @irfanpena (#875)
- fix: chat completion endpoint hangs @louis-jan (#877)
- Fix openai collection test @hiento09 (#874)
- fix: create sqlite tables @marknguyen1302 (#873)
- chore: deprecate cortex init @louis-jan (#868)
- fix: use zlib static @vansangpfiev (#872)
- fix: correct consumed resource table @marknguyen1302 (#869)
- feat: add resources consumed in ps command @marknguyen1302 (#866)
- fix: clean resources should not remove engine file @louis-jan (#867)
- feat: support anthropic engine @marknguyen1302 (#864)
- fix: throw error when file not found during model download @marknguyen1302 (#865)
- feat: cortex-cpp node addon @louis-jan (#852)
- Remove the tap and untap brew installation command @irfanpena (#859)
- fix: correct telemetry timestamp @marknguyen1302 (#857)
- feat: support pull model by specific fileName @marknguyen1302 (#858)
- feat: replace typeorm by sequelize @marknguyen1302 (#856)
- cortex cpp switch to macos-latest runner @hiento09 (#854)
Contributors
@Van-QA, @hiento09, @hientominh, @irfanpena, @louis-jan, @marknguyen1302 and @vansangpfiev
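Two entries in this release deal with reshaping Anthropic responses (#897, #925). As a hedged sketch of what such a transform involves, the function below normalizes an Anthropic-style Messages response into the OpenAI chat-completion shape; field names follow the public Anthropic Messages API, and this is an illustration, not the project's actual code.

```python
def anthropic_to_openai(resp: dict) -> dict:
    """Convert an Anthropic Messages response into OpenAI chat-completion form."""
    # Anthropic returns "content" as a list of typed blocks; join the text ones.
    text = "".join(
        block.get("text", "")
        for block in resp.get("content", [])
        if block.get("type") == "text"
    )
    # Map Anthropic's "end_turn" stop reason to OpenAI's "stop".
    stop_reason = resp.get("stop_reason")
    finish = "stop" if stop_reason == "end_turn" else stop_reason
    return {
        "choices": [
            {
                "index": 0,
                "message": {"role": "assistant", "content": text},
                "finish_reason": finish,
            }
        ]
    }
```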
0.4.37-2
Changes
- feat: add cortex-cpp version to log @vansangpfiev (#912)
- fix: config remote engine @marknguyen1302 (#922)
- chore: update default api server config @louis-jan (#921)
- chore: unsupported platform engine status @louis-jan (#920)
- refactor: cortex cli as client - communicate with API server via cortexjs @louis-jan (#906)
- fix: should not return failed code on model already loaded @louis-jan (#913)
- fix: add engine in unload model request @marknguyen1302 (#916)
- Correct path of file source.changes @hiento09 (#915)
- Feat cortex publish launchpad @hiento09 (#910)
- fix: wrong condition to add engine param @marknguyen1302 (#914)
- fix: wrong remote engine payload @marknguyen1302 (#911)
- feat: add cortex version option @marknguyen1302 (#904)
- fix: electron build @louis-jan (#908)
- fix: cannot chat with remote model @marknguyen1302 (#907)
- feat: update models command @marknguyen1302 (#905)
- remove --options max-old-space-size=30720,tls-min-v1.0,expose-gc pkg @hiento09 (#902)
- refactor: deprecate cortex configs @louis-jan (#901)
- feat: Create llama3.yml presets for llama3 family @Van-QA (#862)
- fix: allow duplicated options @marknguyen1302 (#903)
- chore: auto load model on /chat/completions request @louis-jan (#900)
- Remove serve command @marknguyen1302 (#896)
- fix: transform wrong format in JSON response from Anthropic @marknguyen1302 (#897)
- fix: download model wrong event emission @louis-jan (#895)
- chore: lock cortex-cpp dependency version @louis-jan (#894)
- feat: support running multiple engines at the same time @vansangpfiev (#891)
- fix: throttle leading and trailing event @louis-jan (#892)
- fix: wrong engine list @marknguyen1302 (#890)
- feat: add engine init endpoint @louis-jan (#888)
- feat: change init engine syntax @marknguyen1302 (#889)
- fix: wrong telemetry folder, correct swagger @marknguyen1302 (#887)
- Chore test openai api @hiento09 (#886)
- fix: add custom model id @marknguyen1302 (#882)
- fix: remote engines should accept normalized headers only @louis-jan (#885)
- add better example for the request body @irfanpena (#883)
- chore: add an entry to allow JS import @louis-jan (#878)
- fix: cortex-cpp node prebuild dependencies @louis-jan (#879)
- Update the README @irfanpena (#875)
- fix: chat completion endpoint hangs @louis-jan (#877)
- Fix openai collection test @hiento09 (#874)
- fix: create sqlite tables @marknguyen1302 (#873)
- chore: deprecate cortex init @louis-jan (#868)
- fix: use zlib static @vansangpfiev (#872)
- fix: correct consumed resource table @marknguyen1302 (#869)
- feat: add resources consumed in ps command @marknguyen1302 (#866)
- fix: clean resources should not remove engine file @louis-jan (#867)
- feat: support anthropic engine @marknguyen1302 (#864)
- fix: throw error when file not found during model download @marknguyen1302 (#865)
- feat: cortex-cpp node addon @louis-jan (#852)
- Remove the tap and untap brew installation command @irfanpena (#859)
- fix: correct telemetry timestamp @marknguyen1302 (#857)
- feat: support pull model by specific fileName @marknguyen1302 (#858)
- feat: replace typeorm by sequelize @marknguyen1302 (#856)
- cortex cpp switch to macos-latest runner @hiento09 (#854)
Contributors
@Van-QA, @hiento09, @hientominh, @irfanpena, @louis-jan, @marknguyen1302 and @vansangpfiev
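This release includes a fix to throttle leading and trailing events (#892), which concerns event emission during downloads. As a generic sketch of that pattern (not the project's code), the class below emits the first event immediately (leading edge), coalesces events arriving within the interval, and emits the last pending one on flush (trailing edge); the injected clock is only there to make the sketch testable.

```python
class Throttle:
    """Leading+trailing throttle: emit first event now, coalesce the rest."""

    def __init__(self, interval, fn, now):
        self.interval = interval  # minimum seconds between emitted events
        self.fn = fn              # callback receiving throttled values
        self.now = now            # injected clock, e.g. time.monotonic
        self.last = None          # timestamp of the last leading emission
        self.pending = None       # most recent coalesced value, if any

    def call(self, value):
        t = self.now()
        if self.last is None or t - self.last >= self.interval:
            # Leading edge: emit immediately and start a new interval.
            self.last = t
            self.pending = None
            self.fn(value)
        else:
            # Inside the interval: keep only the latest value.
            self.pending = value

    def flush(self):
        # Trailing edge: emit the last coalesced value, if one exists.
        if self.pending is not None:
            self.fn(self.pending)
            self.pending = None
```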
0.4.37
Changes
- fix: config remote engine @marknguyen1302 (#922)
- chore: update default api server config @louis-jan (#921)
- chore: unsupported platform engine status @louis-jan (#920)
- refactor: cortex cli as client - communicate with API server via cortexjs @louis-jan (#906)
- fix: should not return failed code on model already loaded @louis-jan (#913)
- fix: add engine in unload model request @marknguyen1302 (#916)
- Correct path of file source.changes @hiento09 (#915)
- Feat cortex publish launchpad @hiento09 (#910)
- fix: wrong condition to add engine param @marknguyen1302 (#914)
- fix: wrong remote engine payload @marknguyen1302 (#911)
- feat: add cortex version option @marknguyen1302 (#904)
- fix: electron build @louis-jan (#908)
- fix: cannot chat with remote model @marknguyen1302 (#907)
- feat: update models command @marknguyen1302 (#905)
- remove --options max-old-space-size=30720,tls-min-v1.0,expose-gc pkg @hiento09 (#902)
- refactor: deprecate cortex configs @louis-jan (#901)
- feat: Create llama3.yml presets for llama3 family @Van-QA (#862)
- fix: allow duplicated options @marknguyen1302 (#903)
- chore: auto load model on /chat/completions request @louis-jan (#900)
- Remove serve command @marknguyen1302 (#896)
- fix: transform wrong format in JSON response from Anthropic @marknguyen1302 (#897)
- fix: download model wrong event emission @louis-jan (#895)
- chore: lock cortex-cpp dependency version @louis-jan (#894)
- feat: support running multiple engines at the same time @vansangpfiev (#891)
- fix: throttle leading and trailing event @louis-jan (#892)
- fix: wrong engine list @marknguyen1302 (#890)
- feat: add engine init endpoint @louis-jan (#888)
- feat: change init engine syntax @marknguyen1302 (#889)
- fix: wrong telemetry folder, correct swagger @marknguyen1302 (#887)
- Chore test openai api @hiento09 (#886)
- fix: add custom model id @marknguyen1302 (#882)
- fix: remote engines should accept normalized headers only @louis-jan (#885)
- add better example for the request body @irfanpena (#883)
- chore: add an entry to allow JS import @louis-jan (#878)
- fix: cortex-cpp node prebuild dependencies @louis-jan (#879)
- Update the README @irfanpena (#875)
- fix: chat completion endpoint hangs @louis-jan (#877)
- Fix openai collection test @hiento09 (#874)
- fix: create sqlite tables @marknguyen1302 (#873)
- chore: deprecate cortex init @louis-jan (#868)
- fix: use zlib static @vansangpfiev (#872)
- fix: correct consumed resource table @marknguyen1302 (#869)
- feat: add resources consumed in ps command @marknguyen1302 (#866)
- fix: clean resources should not remove engine file @louis-jan (#867)
- feat: support anthropic engine @marknguyen1302 (#864)
- fix: throw error when file not found during model download @marknguyen1302 (#865)
- feat: cortex-cpp node addon @louis-jan (#852)
- Remove the tap and untap brew installation command @irfanpena (#859)
- fix: correct telemetry timestamp @marknguyen1302 (#857)
- feat: support pull model by specific fileName @marknguyen1302 (#858)
- feat: replace typeorm by sequelize @marknguyen1302 (#856)
- cortex cpp switch to macos-latest runner @hiento09 (#854)
Contributors
@Van-QA, @hiento09, @irfanpena, @louis-jan, @marknguyen1302 and @vansangpfiev
0.4.36
v0.4.36 fix: add engine to unload model request
0.4.35
v0.4.35 fix: wrong condition to add engine param
0.4.32-6
Changes
- feat: add cortex version option @marknguyen1302 (#904)
- fix: electron build @louis-jan (#908)
- fix: cannot chat with remote model @marknguyen1302 (#907)
- feat: update models command @marknguyen1302 (#905)
- remove --options max-old-space-size=30720,tls-min-v1.0,expose-gc pkg @hiento09 (#902)
- refactor: deprecate cortex configs @louis-jan (#901)
- feat: Create llama3.yml presets for llama3 family @Van-QA (#862)
- fix: allow duplicated options @marknguyen1302 (#903)
- chore: auto load model on /chat/completions request @louis-jan (#900)
- Remove serve command @marknguyen1302 (#896)
- fix: transform wrong format in JSON response from Anthropic @marknguyen1302 (#897)
- fix: download model wrong event emission @louis-jan (#895)
- chore: lock cortex-cpp dependency version @louis-jan (#894)
- feat: support running multiple engines at the same time @vansangpfiev (#891)
- fix: throttle leading and trailing event @louis-jan (#892)
- fix: wrong engine list @marknguyen1302 (#890)
- feat: add engine init endpoint @louis-jan (#888)
- feat: change init engine syntax @marknguyen1302 (#889)
- fix: wrong telemetry folder, correct swagger @marknguyen1302 (#887)
- Chore test openai api @hiento09 (#886)
- fix: add custom model id @marknguyen1302 (#882)
- fix: remote engines should accept normalized headers only @louis-jan (#885)
- add better example for the request body @irfanpena (#883)
- chore: add an entry to allow JS import @louis-jan (#878)
- fix: cortex-cpp node prebuild dependencies @louis-jan (#879)
- Update the README @irfanpena (#875)
- fix: chat completion endpoint hangs @louis-jan (#877)
- Fix openai collection test @hiento09 (#874)
- fix: create sqlite tables @marknguyen1302 (#873)
- chore: deprecate cortex init @louis-jan (#868)
- fix: use zlib static @vansangpfiev (#872)
- fix: correct consumed resource table @marknguyen1302 (#869)
- feat: add resources consumed in ps command @marknguyen1302 (#866)
- fix: clean resources should not remove engine file @louis-jan (#867)
- feat: support anthropic engine @marknguyen1302 (#864)
- fix: throw error when file not found during model download @marknguyen1302 (#865)
- feat: cortex-cpp node addon @louis-jan (#852)
- Remove the tap and untap brew installation command @irfanpena (#859)
- fix: correct telemetry timestamp @marknguyen1302 (#857)
- feat: support pull model by specific fileName @marknguyen1302 (#858)
- feat: replace typeorm by sequelize @marknguyen1302 (#856)
- cortex cpp switch to macos-latest runner @hiento09 (#854)
Contributors
@Van-QA, @hiento09, @hientominh, @irfanpena, @louis-jan, @marknguyen1302 and @vansangpfiev
0.4.34
NOTE: NPM dependency - do not remove yet
0.4.33
Changes
- Remove serve command @marknguyen1302 (#896)
- fix: transform wrong format in JSON response from Anthropic @marknguyen1302 (#897)
- fix: download model wrong event emission @louis-jan (#895)
- chore: lock cortex-cpp dependency version @louis-jan (#894)
- feat: support running multiple engines at the same time @vansangpfiev (#891)
- fix: throttle leading and trailing event @louis-jan (#892)
- fix: wrong engine list @marknguyen1302 (#890)
- feat: add engine init endpoint @louis-jan (#888)
- feat: change init engine syntax @marknguyen1302 (#889)
- fix: wrong telemetry folder, correct swagger @marknguyen1302 (#887)
- Chore test openai api @hiento09 (#886)
- fix: add custom model id @marknguyen1302 (#882)
- fix: remote engines should accept normalized headers only @louis-jan (#885)
- add better example for the request body @irfanpena (#883)
- chore: add an entry to allow JS import @louis-jan (#878)
- fix: cortex-cpp node prebuild dependencies @louis-jan (#879)
- Update the README @irfanpena (#875)
- fix: chat completion endpoint hangs @louis-jan (#877)
- Fix openai collection test @hiento09 (#874)
- fix: create sqlite tables @marknguyen1302 (#873)
- chore: deprecate cortex init @louis-jan (#868)
- fix: use zlib static @vansangpfiev (#872)
- fix: correct consumed resource table @marknguyen1302 (#869)
- feat: add resources consumed in ps command @marknguyen1302 (#866)
- fix: clean resources should not remove engine file @louis-jan (#867)
- feat: support anthropic engine @marknguyen1302 (#864)
- fix: throw error when file not found during model download @marknguyen1302 (#865)
- feat: cortex-cpp node addon @louis-jan (#852)
- Remove the tap and untap brew installation command @irfanpena (#859)
- fix: correct telemetry timestamp @marknguyen1302 (#857)
- feat: support pull model by specific fileName @marknguyen1302 (#858)
- feat: replace typeorm by sequelize @marknguyen1302 (#856)
- cortex cpp switch to macos-latest runner @hiento09 (#854)
Contributors
@hiento09, @hientominh, @irfanpena, @louis-jan, @marknguyen1302 and @vansangpfiev
0.4.30
Changes
- Remove serve command @marknguyen1302 (#896)
- fix: transform wrong format in JSON response from Anthropic @marknguyen1302 (#897)
- fix: download model wrong event emission @louis-jan (#895)
- chore: lock cortex-cpp dependency version @louis-jan (#894)
- feat: support running multiple engines at the same time @vansangpfiev (#891)
- fix: throttle leading and trailing event @louis-jan (#892)
- fix: wrong engine list @marknguyen1302 (#890)
- feat: add engine init endpoint @louis-jan (#888)
- feat: change init engine syntax @marknguyen1302 (#889)
- fix: wrong telemetry folder, correct swagger @marknguyen1302 (#887)
- Chore test openai api @hiento09 (#886)
- fix: add custom model id @marknguyen1302 (#882)
- fix: remote engines should accept normalized headers only @louis-jan (#885)
- add better example for the request body @irfanpena (#883)
- chore: add an entry to allow JS import @louis-jan (#878)
- fix: cortex-cpp node prebuild dependencies @louis-jan (#879)
- Update the README @irfanpena (#875)
- fix: chat completion endpoint hangs @louis-jan (#877)
- Fix openai collection test @hiento09 (#874)
- fix: create sqlite tables @marknguyen1302 (#873)
- chore: deprecate cortex init @louis-jan (#868)
- fix: use zlib static @vansangpfiev (#872)
- fix: correct consumed resource table @marknguyen1302 (#869)
- feat: add resources consumed in ps command @marknguyen1302 (#866)
- fix: clean resources should not remove engine file @louis-jan (#867)
- feat: support anthropic engine @marknguyen1302 (#864)
- fix: throw error when file not found during model download @marknguyen1302 (#865)
- feat: cortex-cpp node addon @louis-jan (#852)
- Remove the tap and untap brew installation command @irfanpena (#859)
- fix: correct telemetry timestamp @marknguyen1302 (#857)
- feat: support pull model by specific fileName @marknguyen1302 (#858)
- feat: replace typeorm by sequelize @marknguyen1302 (#856)
- cortex cpp switch to macos-latest runner @hiento09 (#854)
Contributors
@hiento09, @hientominh, @irfanpena, @louis-jan, @marknguyen1302 and @vansangpfiev