# container.token

### Available Operations

- tokenization - Tokenization
- detokenization - Detokenization
## tokenization

Given a text input, generate a tokenized output of token IDs.
```python
import os

from friendli import SyncFriendli

with SyncFriendli(
    token=os.getenv("FRIENDLI_TOKEN", ""),
) as friendli:
    res = friendli.container.token.tokenization(
        prompt="What is generative AI?", model="(adapter-route)"
    )

    # Handle response
    print(res)
```

### Parameters

| Parameter | Type | Required | Description | Example |
|---|---|---|---|---|
| `prompt` | *str* | ✔️ | Input text prompt to tokenize. | What is generative AI? |
| `model` | *OptionalNullable[str]* | ➖ | Routes the request to a specific adapter. | (adapter-route) |
| `retries` | *Optional[utils.RetryConfig]* | ➖ | Configuration to override the default retry behavior of the client. | |
| `server_url` | *Optional[str]* | ➖ | An optional server URL to use. | http://localhost:8080 |
### Response

**models.ContainerTokenizationSuccess**

### Errors
| Error Type | Status Code | Content Type |
|---|---|---|
| models.SDKError | 4XX, 5XX | */* |
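To make the tokenize/detokenize contract concrete, here is a toy, self-contained sketch. The vocabulary and whitespace splitting below are illustrative assumptions only; the server uses a real subword tokenizer, so actual IDs will differ:

```python
# Hypothetical word-level vocabulary, for illustration only.
VOCAB = {"What": 0, "is": 1, "generative": 2, "AI": 3, "?": 4}
INVERSE = {i: w for w, i in VOCAB.items()}


def tokenize(text: str) -> list[int]:
    # Naive whitespace split; real tokenizers use subword algorithms (BPE etc.).
    return [VOCAB[w] for w in text.replace("?", " ?").split()]


def detokenize(tokens: list[int]) -> str:
    # Inverse mapping: token IDs back to a text string.
    return " ".join(INVERSE[t] for t in tokens).replace(" ?", "?")


ids = tokenize("What is generative AI?")
print(ids)              # [0, 1, 2, 3, 4]
print(detokenize(ids))  # What is generative AI?
```

The point is the round-trip invariant: detokenizing a tokenized prompt recovers the original text.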
## detokenization

Given a list of token IDs, generate a detokenized output text string.
```python
import os

from friendli import SyncFriendli

with SyncFriendli(
    token=os.getenv("FRIENDLI_TOKEN", ""),
) as friendli:
    res = friendli.container.token.detokenization(
        tokens=[
            128000,
            3923,
            374,
            1803,
            1413,
            15592,
            30,
        ],
        model="(adapter-route)",
    )

    # Handle response
    print(res)
```

### Parameters

| Parameter | Type | Required | Description | Example |
|---|---|---|---|---|
| `tokens` | *List[int]* | ✔️ | A token sequence to detokenize. | [128000, 3923, 374, 1803, 1413, 15592, 30] |
| `model` | *OptionalNullable[str]* | ➖ | Routes the request to a specific adapter. | (adapter-route) |
| `retries` | *Optional[utils.RetryConfig]* | ➖ | Configuration to override the default retry behavior of the client. | |
| `server_url` | *Optional[str]* | ➖ | An optional server URL to use. | http://localhost:8080 |
### Response

**models.ContainerDetokenizationSuccess**

### Errors
| Error Type | Status Code | Content Type |
|---|---|---|
| models.SDKError | 4XX, 5XX | */* |
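Both operations accept a `retries` parameter (`utils.RetryConfig`) to override the client's default retry behavior. As a rough illustration of what such a policy does, here is a hand-rolled retry-with-exponential-backoff wrapper; this is a generic sketch, not the SDK's implementation, and the function and parameter names are hypothetical:

```python
import time


def with_retries(fn, max_attempts=3, initial_backoff=0.5, factor=2.0):
    """Call fn(), retrying on connection errors with exponential backoff.

    A hand-rolled stand-in for what a retry config expresses: how many
    attempts to make, how long to wait at first, and how fast waits grow.
    """
    backoff = initial_backoff
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except ConnectionError:
            if attempt == max_attempts:
                raise  # out of attempts: surface the error to the caller
            time.sleep(backoff)
            backoff *= factor
```

You would wrap a flaky network call, e.g. `with_retries(lambda: client.tokenization(prompt=...))`; the SDK's `retries` parameter applies an equivalent policy inside the client itself.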