Title: | Interact with Large Language Models in 'RStudio' |
Version: | 0.3.0 |
Description: | Enables user interactivity with large language models ('LLMs') inside the 'RStudio' integrated development environment (IDE). The user can interact with the model using the 'shiny' app included in this package, or directly in the 'R' console. It comes with back-ends for 'OpenAI', 'GitHub' 'Copilot', and 'LlamaGPT'. |
URL: | https://github.com/mlverse/chattr, https://mlverse.github.io/chattr/ |
BugReports: | https://github.com/mlverse/chattr/issues |
License: | MIT + file LICENSE |
Encoding: | UTF-8 |
RoxygenNote: | 7.3.2 |
Imports: | rstudioapi, lifecycle, processx, config, ellmer (≥ 0.2.0), purrr, rlang, bslib, shiny, clipr, httr2 (≥ 1.1.0), yaml, glue, coro, cli, fs |
Depends: | R (≥ 2.10) |
Suggests: | covr, knitr, rmarkdown, testthat (≥ 3.0.0), shinytest2, withr |
Config/testthat/edition: | 3 |
VignetteBuilder: | knitr |
NeedsCompilation: | no |
Packaged: | 2025-05-28 18:14:47 UTC; edgar |
Author: | Edgar Ruiz [aut, cre], Posit Software, PBC [cph, fnd] |
Maintainer: | Edgar Ruiz <edgar@posit.co> |
Repository: | CRAN |
Date/Publication: | 2025-05-28 18:30:02 UTC |
Displays and sets the current session's chat history
Description
Displays and sets the current session's chat history
Usage
ch_history(x = NULL)
Arguments
x |
A list object that contains the chat history. Use this argument to override the current history. |
Value
A list object with the current chat history
Examples
library(chattr)
chattr_use("test", stream = FALSE)
chattr("hello")
# View history
ch_history()
# Save history to a file
chat_file <- tempfile()
saveRDS(ch_history(), chat_file)
# Reset history
ch_history(list())
# Re-load history
ch_history(readRDS(chat_file))
Method to integrate with new LLM APIs
Description
Method to integrate with new LLM APIs
Usage
ch_submit(
defaults,
prompt = NULL,
stream = NULL,
prompt_build = TRUE,
preview = FALSE,
...
)
Arguments
defaults |
Defaults object, generally pulled from chattr_defaults() |
prompt |
The prompt to send to the LLM |
stream |
Whether to output the response from the LLM as it is generated, or to wait until the response is complete. Defaults to TRUE. |
prompt_build |
Indicates whether to include the context and additional prompt as part of the request |
preview |
Primarily used for debugging. Indicates whether to send the prompt to the LLM (FALSE), or to print out the resulting prompt instead (TRUE) |
... |
Optional arguments; currently unused. |
Value
The output from the model currently in use.
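Examples
A minimal sketch of registering a custom back-end; the class name "ch_my_llm" is hypothetical, and a real method would call the model's API instead of echoing the prompt back.
library(chattr)
# Hypothetical S3 method, dispatched on a `defaults` object of class "ch_my_llm"
ch_submit.ch_my_llm <- function(defaults,
                                prompt = NULL,
                                stream = NULL,
                                prompt_build = TRUE,
                                preview = FALSE,
                                ...) {
  # When previewing, return the prompt instead of sending it to the model
  if (preview) {
    return(prompt)
  }
  # Placeholder response; replace with the actual request to your LLM
  paste0("You said: ", prompt)
}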
Submits prompt to LLM
Description
Submits prompt to LLM
Usage
chattr(prompt = NULL, preview = FALSE, prompt_build = TRUE, stream = NULL)
Arguments
prompt |
Request to send to LLM. Defaults to NULL |
preview |
Primarily used for debugging. Indicates whether to send the prompt to the LLM (FALSE), or to print out the resulting prompt instead (TRUE) |
prompt_build |
Indicates whether to include the context and additional prompt as part of the request |
stream |
Whether to output the response from the LLM as it is generated, or to wait until the response is complete. Defaults to TRUE. |
Value
The output of the LLM to the console, document, or script.
Examples
library(chattr)
chattr_use("test")
chattr("hello")
chattr("hello", preview = TRUE)
Starts a Shiny app interface to the LLM
Description
Starts a Shiny app interface to the LLM
Usage
chattr_app(
viewer = c("viewer", "dialog"),
as_job = getOption("chattr.as_job", FALSE),
as_job_port = getOption("shiny.port", 7788),
as_job_host = getOption("shiny.host", "127.0.0.1")
)
Arguments
viewer |
Specifies where the Shiny app will be displayed |
as_job |
App runs as an RStudio IDE Job. Defaults to FALSE. If set to TRUE, the Shiny app will not be able to transfer the code blocks directly to the document or console in the IDE. |
as_job_port |
Port to use for the Shiny app. Applicable only if as_job is set to TRUE. |
as_job_host |
Host IP to use for the Shiny app. Applicable only if as_job is set to TRUE. |
Value
A chat interface inside the 'RStudio' IDE
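Examples
A minimal sketch that opens the app in the Viewer pane, using the built-in "test" back-end shown in the other examples; chattr_app() requires an interactive session.
## Not run:
library(chattr)
chattr_use("test")
chattr_app(viewer = "viewer")
## End(Not run)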
Default arguments to use when making requests to the LLM
Description
Default arguments to use when making requests to the LLM
Usage
chattr_defaults(
type = "default",
prompt = NULL,
max_data_files = NULL,
max_data_frames = NULL,
include_doc_contents = NULL,
include_history = NULL,
provider = NULL,
path = NULL,
model = NULL,
model_arguments = NULL,
system_msg = NULL,
yaml_file = "chattr.yml",
force = FALSE,
label = NULL,
...
)
Arguments
type |
Entry point to interact with the model. Accepted values: 'notebook', 'chat' |
prompt |
Request to send to LLM. Defaults to NULL |
max_data_files |
Sets the maximum number of data files to send to the model. It defaults to 20. To send all, set to NULL |
max_data_frames |
Sets the maximum number of data frames loaded in the current R session to send to the model. It defaults to 20. To send all, set to NULL |
include_doc_contents |
Indicates whether to send the current code in the document |
include_history |
Indicates whether to include the chat history every time a new prompt is submitted |
provider |
The name of the provider of the LLM. Today, only "openai" is available |
path |
The location of the model. It can be a URL or a file path. |
model |
The name or path to the model to use. |
model_arguments |
Additional arguments to pass to the model as part of the request; it requires a list. Examples of arguments: temperature, top_p, max_tokens |
system_msg |
For OpenAI GPT 3.5 or above, the system message to send as part of the request |
yaml_file |
The path to a valid YAML file that is compatible with the 'config' package |
force |
Re-process the base and any workspace-level file defaults |
label |
Label to display in the Shiny app, and other locations |
... |
Additional model arguments that are not standard for all models/backends |
Details
The idea is that, because an addin shortcut will be used to execute the request, all of the other arguments can be controlled via this function. By default, it will try to load defaults from a 'config' YAML file; if none are found, then the defaults for GPT 3.5 will be used. The defaults can be modified by calling this function, even after the interactive session has started.
Value
A 'ch_model' object that contains the current defaults that will be used to communicate with the LLM.
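Examples
A short sketch of adjusting session defaults; the argument values below are illustrative, not recommendations.
library(chattr)
chattr_use("test")
# Limit the number of data files sent, and pass model arguments as a list
chattr_defaults(
  max_data_files = 10,
  model_arguments = list(temperature = 0.2)
)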
Saves the current defaults in a YAML file that is compatible with the 'config' package
Description
Saves the current defaults in a YAML file that is compatible with the 'config' package
Usage
chattr_defaults_save(path = "chattr.yml", overwrite = FALSE, type = NULL)
Arguments
path |
Path to the file to save the configuration to |
overwrite |
Indicates whether to replace the file if it exists |
type |
The type of UI to save the defaults for. It defaults to NULL, which will save whatever types have been used during the current R session |
Value
It creates a YAML file with the defaults set in the current R session.
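Examples
A minimal sketch that saves the current session defaults to a temporary YAML file; the file location is illustrative.
library(chattr)
chattr_use("test")
tmp_yml <- file.path(tempdir(), "chattr.yml")
chattr_defaults_save(path = tmp_yml, overwrite = TRUE)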
Confirms connectivity to LLM interface
Description
Confirms connectivity to LLM interface
Usage
chattr_test(defaults = NULL)
ch_test(defaults = NULL)
Arguments
defaults |
Defaults object, generally pulled from chattr_defaults() |
Value
It returns console messages with the status of the test.
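Examples
A minimal sketch using the built-in "test" back-end; testing a live back-end would additionally require valid credentials for that provider.
library(chattr)
chattr_use("test")
# Check connectivity using the current session defaults
chattr_test()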
Sets the LLM model to use in your session
Description
Sets the LLM model to use in your session
Usage
chattr_use(x = NULL, ...)
Arguments
x |
A pre-determined provider/model name, or an 'ellmer' chat object |
... |
Default values to modify. |
Details
The valid pre-determined provider/models values are: 'databricks-dbrx', 'databricks-meta-llama31-70b', 'databricks-mixtral8x7b', 'gpt41-mini', 'gpt41-nano', 'gpt41', 'gpt4o', and 'ollama'.
If you need a provider, or model, that is not available as a pre-determined value, create an ellmer chat object and pass that to chattr_use(). The list of valid models is found here: https://ellmer.tidyverse.org/index.html#providers
Set a default
You can set up an R option to designate a default provider/model connection. To do this, pass the ellmer connection command you wish to use in the .chattr_chat option, for example: options(.chattr_chat = ellmer::chat_anthropic()). If you add that code to your .Rprofile, chattr will use that as the default model and settings every time you start an R session. Use the usethis::edit_r_profile() command to easily edit your .Rprofile.
Value
It returns console messages that allow the user to select the model to use.
Examples
## Not run:
# Use a valid provider/model label
chattr_use("gpt41-mini")
# Pass an `ellmer` object
my_chat <- ellmer::chat_anthropic()
chattr_use(my_chat)
## End(Not run)