> **Warning:** 🚧 Cortex.cpp is currently under development. Our documentation outlines the intended behavior of Cortex, which may not yet be fully implemented in the codebase.

# cortex models

This command lets you start, stop, and otherwise manage local and remote models within Cortex.

Usage:

> **Info:** You can use the `--verbose` flag to display more detailed output of the internal processes. To apply this flag, use the following format: `cortex --verbose [subcommand]`.


```sh
# Stable
cortex models [options] [subcommand]
# Beta
cortex-beta models [options] [subcommand]
# Nightly
cortex-nightly models [options] [subcommand]
```

Options:

| Option | Description | Required | Default value | Example |
|---|---|---|---|---|
| `-h`, `--help` | Display help information for the command. | No | - | `-h` |
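
For instance, you can combine the global `--verbose` flag with any `models` subcommand to watch what Cortex does internally:

```sh
# List models with detailed internal logging
cortex --verbose models list

# Show the help text for the models command
cortex models -h
```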

## cortex models get

> **Info:** This CLI command calls a corresponding Cortex API endpoint.

This command returns the details of a model specified by a `model_id`.

Usage:

> **Info:** You can use the `--verbose` flag to display more detailed output of the internal processes. To apply this flag, use the following format: `cortex --verbose [subcommand]`.


```sh
# Stable
cortex models get <model_id>
# Beta
cortex-beta models get <model_id>
# Nightly
cortex-nightly models get <model_id>
```

For example, `cortex models get tinyllama` returns the following:


```
ModelConfig Details:
-------------------
id: tinyllama
name: tinyllama 1B
model: tinyllama:1B
version: 1
stop: [</s>]
top_p: 0.95
temperature: 0.7
frequency_penalty: 0
presence_penalty: 0
max_tokens: 4096
stream: true
ngl: 33
ctx_len: 4096
engine: llamacpp
prompt_template:
<|system|>
{system_message}</s>
<|user|>
{prompt}</s>
<|assistant|>
system_template:
<|system|>
user_template: </s>
<|user|>
ai_template: </s>
<|assistant|>
tp: 0
text_model: false
files: [model_path]
created: 1725342964
```

> **Info:** This command uses a `model_id` from a model that you have downloaded or that is available in your file system.

Options:

| Option | Description | Required | Default value | Example |
|---|---|---|---|---|
| `model_id` | The identifier of the model you want to retrieve. | Yes | - | `mistral` |
| `-h`, `--help` | Display help information for the command. | No | - | `-h` |

## cortex models list

> **Info:** This CLI command calls a corresponding Cortex API endpoint.

This command lists all downloaded local and remote models.

Usage:

> **Info:** You can use the `--verbose` flag to display more detailed output of the internal processes. To apply this flag, use the following format: `cortex --verbose [subcommand]`.


```sh
# Stable
cortex models list [options]
# Beta
cortex-beta models list [options]
# Nightly
cortex-nightly models list [options]
```

For example, `cortex models list` returns the following:


```
+---------+----------------+----------+---------+
| (Index) | ID             | engine   | version |
+---------+----------------+----------+---------+
| 1       | tinyllama-gguf | llamacpp | 1       |
+---------+----------------+----------+---------+
| 2       | tinyllama      | llamacpp | 1       |
+---------+----------------+----------+---------+
```

Options:

| Option | Description | Required | Default value | Example |
|---|---|---|---|---|
| `-h`, `--help` | Display help for the command. | No | - | `-h` |
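
If the Cortex API server is running, the same list is available over HTTP. The host, port, and route below are assumptions based on Cortex's OpenAI-compatible API; verify them against your server configuration:

```sh
# Assumed default host/port and OpenAI-compatible route (verify for your install)
curl http://127.0.0.1:39281/v1/models
```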

## cortex models start

> **Info:** This CLI command calls a corresponding Cortex API endpoint.

This command starts a model specified by a `model_id`.

Usage:

> **Info:** You can use the `--verbose` flag to display more detailed output of the internal processes. To apply this flag, use the following format: `cortex --verbose [subcommand]`.


```sh
# Stable
cortex models start [options] <model_id>
# Beta
cortex-beta models start [options] <model_id>
# Nightly
cortex-nightly models start [options] <model_id>
```

> **Info:** This command uses a `model_id` from a model that you have downloaded or that is available in your file system.

Options:

| Option | Description | Required | Default value | Example |
|---|---|---|---|---|
| `model_id` | The identifier of the model you want to start. | Yes | Prompt to select from the available models | `mistral` |
| `-h`, `--help` | Display help information for the command. | No | - | `-h` |
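
For example, to start the `tinyllama` model from the earlier examples; if you omit the `model_id`, Cortex prompts you to select from the available models:

```sh
# Start a specific model
cortex models start tinyllama

# Omit the model_id to be prompted for a model to start
cortex models start
```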

## cortex models stop

> **Info:** This CLI command calls a corresponding Cortex API endpoint.

This command stops a model specified by a `model_id`.

Usage:

> **Info:** You can use the `--verbose` flag to display more detailed output of the internal processes. To apply this flag, use the following format: `cortex --verbose [subcommand]`.


```sh
# Stable
cortex models stop <model_id>
# Beta
cortex-beta models stop <model_id>
# Nightly
cortex-nightly models stop <model_id>
```

> **Info:** This command uses a `model_id` from a model that you have previously started.

Options:

| Option | Description | Required | Default value | Example |
|---|---|---|---|---|
| `model_id` | The identifier of the model you want to stop. | Yes | - | `mistral` |
| `-h`, `--help` | Display help information for the command. | No | - | `-h` |
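
For example, to stop the model started in the previous section:

```sh
# Only a model that is currently running can be stopped
cortex models stop tinyllama
```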

## cortex models delete

> **Info:** This CLI command calls a corresponding Cortex API endpoint.

This command deletes a local model specified by a `model_id`.

Usage:

> **Info:** You can use the `--verbose` flag to display more detailed output of the internal processes. To apply this flag, use the following format: `cortex --verbose [subcommand]`.


```sh
# Stable
cortex models delete <model_id>
# Beta
cortex-beta models delete <model_id>
# Nightly
cortex-nightly models delete <model_id>
```

> **Info:** This command uses a `model_id` from a model that you have downloaded or that is available in your file system.

Options:

| Option | Description | Required | Default value | Example |
|---|---|---|---|---|
| `model_id` | The identifier of the model you want to delete. | Yes | - | `mistral` |
| `-h`, `--help` | Display help for the command. | No | - | `-h` |
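
Deletion removes the model from your local setup, so it is worth confirming the exact `model_id` first:

```sh
# Check the exact model_id before deleting
cortex models list
cortex models delete tinyllama
```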

## cortex models alias

This command adds an alias to a local model; the alias functions the same as the `model_id`.

Usage:

> **Info:** You can use the `--verbose` flag to display more detailed output of the internal processes. To apply this flag, use the following format: `cortex --verbose [subcommand]`.


```sh
# Stable
cortex models alias --model_id <model_id> --alias <new_model_id_or_model_alias>
# Beta
cortex-beta models alias --model_id <model_id> --alias <new_model_id_or_model_alias>
# Nightly
cortex-nightly models alias --model_id <model_id> --alias <new_model_id_or_model_alias>
```

Options:

| Option | Description | Required | Default value | Example |
|---|---|---|---|---|
| `--model_id` | The identifier of the model. | Yes | - | `mistral` |
| `--alias` | The new identifier for the model. | Yes | - | `mistral_2` |
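
Using the example values above, the following registers `mistral_2` as an alias for `mistral`, after which either identifier works:

```sh
cortex models alias --model_id mistral --alias mistral_2

# The alias can now be used wherever a model_id is expected
cortex models start mistral_2
```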

## cortex models update

This command updates the `model.yaml` file of a local model.

Usage:

> **Info:** You can use the `--verbose` flag to display more detailed output of the internal processes. To apply this flag, use the following format: `cortex --verbose [subcommand]`.


```sh
# Stable
cortex models update [options]
# Beta
cortex-beta models update [options]
# Nightly
cortex-nightly models update [options]
```

Options:

| Option | Description | Required | Default value | Example |
|---|---|---|---|---|
| `-h`, `--help` | Display help for the command. | No | - | `-h` |
| `--model_id` | Unique identifier for the model. | Yes | - | `--model_id my_model` |
| `--name` | Name of the model. | No | - | `--name "GPT Model"` |
| `--model` | Model type or architecture. | No | - | `--model GPT-4` |
| `--version` | Version of the model to use. | No | - | `--version 1.2.0` |
| `--stop` | Stop token to terminate generation. | No | - | `--stop "</s>"` |
| `--top_p` | Sampling parameter for nucleus sampling. | No | - | `--top_p 0.9` |
| `--temperature` | Controls randomness in generation. | No | - | `--temperature 0.8` |
| `--frequency_penalty` | Penalizes repeated tokens based on frequency. | No | - | `--frequency_penalty 0.5` |
| `--presence_penalty` | Penalizes repeated tokens based on presence. | No | 0.0 | `--presence_penalty 0.6` |
| `--max_tokens` | Maximum number of tokens to generate. | No | - | `--max_tokens 1500` |
| `--stream` | Stream output tokens as they are generated. | No | false | `--stream true` |
| `--ngl` | Number of model layers to offload to the GPU. | No | - | `--ngl 4` |
| `--ctx_len` | Maximum context length in tokens. | No | - | `--ctx_len 1024` |
| `--engine` | Compute engine for running the model. | No | - | `--engine CUDA` |
| `--prompt_template` | Template for the prompt structure. | No | - | `--prompt_template "###"` |
| `--system_template` | Template for system-level instructions. | No | - | `--system_template "SYSTEM"` |
| `--user_template` | Template for user inputs. | No | - | `--user_template "USER"` |
| `--ai_template` | Template for AI responses. | No | - | `--ai_template "ASSISTANT"` |
| `--os` | Operating system environment. | No | - | `--os Ubuntu` |
| `--gpu_arch` | GPU architecture specification. | No | - | `--gpu_arch A100` |
| `--quantization_method` | Quantization method for model weights. | No | - | `--quantization_method int8` |
| `--precision` | Floating-point precision for computations. | No | float32 | `--precision float16` |
| `--tp` | Tensor parallelism degree. | No | - | `--tp 4` |
| `--trtllm_version` | Version of the TensorRT-LLM library. | No | - | `--trtllm_version 2.0` |
| `--text_model` | The model used for text generation. | No | - | `--text_model llama2` |
| `--files` | File path or resources associated with the model. | No | - | `--files config.json` |
| `--created` | Creation date of the model. | No | - | `--created 2024-01-01` |
| `--object` | The object type (e.g., model or file). | No | - | `--object model` |
| `--owned_by` | The owner or creator of the model. | No | - | `--owned_by "Company"` |
| `--seed` | Seed for random number generation. | No | - | `--seed 42` |
| `--dynatemp_range` | Range for dynamic temperature scaling. | No | - | `--dynatemp_range 0.7-1.0` |
| `--dynatemp_exponent` | Exponent for dynamic temperature scaling. | No | - | `--dynatemp_exponent 1.2` |
| `--top_k` | Top-K sampling to limit token selection. | No | - | `--top_k 50` |
| `--min_p` | Minimum probability threshold for tokens. | No | - | `--min_p 0.1` |
| `--tfs_z` | Tail-free sampling parameter. | No | - | `--tfs_z 0.5` |
| `--typ_p` | Typical sampling probability threshold. | No | - | `--typ_p 0.9` |
| `--repeat_last_n` | Number of recent tokens considered for the repetition penalty. | No | - | `--repeat_last_n 64` |
| `--repeat_penalty` | Penalty for repeating tokens. | No | - | `--repeat_penalty 1.2` |
| `--mirostat` | Mirostat sampling method for stable generation. | No | - | `--mirostat 1` |
| `--mirostat_tau` | Target entropy for Mirostat. | No | - | `--mirostat_tau 5.0` |
| `--mirostat_eta` | Learning rate for Mirostat. | No | - | `--mirostat_eta 0.1` |
| `--penalize_nl` | Penalize newlines in generation. | No | false | `--penalize_nl true` |
| `--ignore_eos` | Ignore the end-of-sequence token. | No | false | `--ignore_eos true` |
| `--n_probs` | Number of probability outputs to return. | No | - | `--n_probs 5` |
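
For example, the following adjusts several sampling parameters of the `tinyllama` model from the earlier examples in a single call; the command rewrites the matching fields in that model's `model.yaml`:

```sh
# Update several inference parameters at once
cortex models update --model_id tinyllama \
  --temperature 0.8 \
  --top_p 0.9 \
  --max_tokens 2048
```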

## cortex models import

This command imports a local model from the model's GGUF file.

Usage:

> **Info:** You can use the `--verbose` flag to display more detailed output of the internal processes. To apply this flag, use the following format: `cortex --verbose [subcommand]`.


```sh
# Stable
cortex models import --model_id <model_id> --model_path </path/to/your/model.gguf>
# Beta
cortex-beta models import --model_id <model_id> --model_path </path/to/your/model.gguf>
# Nightly
cortex-nightly models import --model_id <model_id> --model_path </path/to/your/model.gguf>
```

Options:

| Option | Description | Required | Default value | Example |
|---|---|---|---|---|
| `-h`, `--help` | Display help for the command. | No | - | `-h` |
| `--model_id` | The identifier of the model. | Yes | - | `mistral` |
| `--model_path` | The path of the model source file. | Yes | - | `/path/to/your/model.gguf` |
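
For example, assuming a GGUF file at the illustrative path `./models/tinyllama.gguf`, you could register it under a `model_id` of your choice:

```sh
# Both the model_id and the path here are illustrative placeholders
cortex models import --model_id tinyllama-local --model_path ./models/tinyllama.gguf
```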