steamship.plugin.outputs package#

Submodules#

steamship.plugin.outputs.block_and_tag_plugin_output module#

class steamship.plugin.outputs.block_and_tag_plugin_output.BlockAndTagPluginOutput(*, file: steamship.data.file.File.CreateRequest = None)[source]#

Bases: steamship.base.model.CamelModel

file: steamship.data.file.File.CreateRequest#
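A tagger-style plugin returns its results wrapped in this object: a single `file` field holding a `File.CreateRequest`. As a minimal sketch — plain dicts standing in for the real `File.CreateRequest`, `Block`, and `Tag` models, so no `steamship` import is needed — the serialized payload looks roughly like:

```python
import json

def block_and_tag_output(blocks):
    """Build a dict shaped like BlockAndTagPluginOutput: one `file`
    field holding a File.CreateRequest-like body of blocks."""
    return {"file": {"blocks": blocks}}

# One text block carrying a single tag; the `kind`/`name` fields here
# approximate the Tag model and are illustrative only.
payload = block_and_tag_output(
    [{"text": "Hello world", "tags": [{"kind": "greeting", "name": "hello"}]}]
)
print(json.dumps(payload))
```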

steamship.plugin.outputs.embedded_items_plugin_output module#

class steamship.plugin.outputs.embedded_items_plugin_output.EmbeddedItemsPluginOutput(*, embeddings: List[List[float]])[source]#

Bases: steamship.base.model.CamelModel

embeddings: List[List[float]]#
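The `embeddings` field presumably holds one vector per embedded item, in input order. A toy, dependency-free sketch producing that `List[List[float]]` shape (the length/space-count "features" are purely illustrative, not a real embedding):

```python
def embed_items(texts):
    """Toy embedder: one fixed-length feature vector per input item."""
    return {"embeddings": [[float(len(t)), float(t.count(" "))] for t in texts]}

out = embed_items(["hello", "hello world"])
# out["embeddings"] is [[5.0, 0.0], [11.0, 1.0]]
```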

steamship.plugin.outputs.model_checkpoint module#

class steamship.plugin.outputs.model_checkpoint.ModelCheckpoint(client: steamship.client.steamship.Steamship, parent_directory: Optional[pathlib.Path] = None, handle: str = 'default', plugin_instance_id: str = None)[source]#

Bases: steamship.base.model.CamelModel

Represents the saved state of a trained PluginInstance.

DEFAULT_HANDLE: ClassVar[str] = 'default'#

archive_path_in_steamship(as_handle: Optional[str] = None) str[source]#

Returns the path to the checkpoint archive on Steamship.

On Steamship, the checkpoint is archived in the Workspace’s PluginInstance bucket as: {plugin_instance_bucket}/{plugin_instance_id}/{checkpoint_handle}.zip

Here we return only the following path, since the bucket is specified separately in the required Steamship API calls: {plugin_instance_id}/{checkpoint_handle}.zip

archive_path_on_disk() pathlib.Path[source]#

Returns the path to the checkpoint archive on disk.

On disk, the checkpoint archive is the file:

{parent_directory}/{checkpoint_handle}.zip

client: steamship.base.client.Client#

download_model_bundle() pathlib.Path[source]#

Downloads the model bundle from Steamship and unzips it into parent_directory.
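The unzip half of this method can be sketched with the standard library (the download itself requires the authenticated Steamship client and is omitted; extracting into `{parent_directory}/{handle}/` assumes the folder convention documented for `folder_path_on_disk` below):

```python
import zipfile
from pathlib import Path

def unpack_model_bundle(archive: Path, parent_directory: Path, handle: str = "default") -> Path:
    """Unzip a downloaded checkpoint archive into {parent_directory}/{handle}/
    and return that folder."""
    target = parent_directory / handle
    target.mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(target)
    return target
```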

folder_path_on_disk() pathlib.Path[source]#

Returns the path to this checkpoint on the local disk.

On disk, the model checkpoint is the folder:

{parent_directory}/{checkpoint_handle}/
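The three path helpers follow the conventions documented above. A stdlib sketch of the same conventions, written as standalone functions rather than the SDK's methods:

```python
from pathlib import Path

def archive_path_in_steamship(plugin_instance_id: str, handle: str = "default") -> str:
    """Bucket-relative key; the bucket itself is supplied separately to the API."""
    return f"{plugin_instance_id}/{handle}.zip"

def archive_path_on_disk(parent_directory: Path, handle: str = "default") -> Path:
    """The zipped checkpoint archive, next to the checkpoint folder."""
    return parent_directory / f"{handle}.zip"

def folder_path_on_disk(parent_directory: Path, handle: str = "default") -> Path:
    """The unzipped checkpoint folder."""
    return parent_directory / handle
```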

handle: str#
parent_directory: Optional[pathlib.Path]#
plugin_instance_id: str#
upload_model_bundle(set_as_default: bool = True)[source]#

Zips and uploads the model bundle to Steamship.
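The zip half of this method can be sketched with `shutil.make_archive`; the upload itself (and the `set_as_default` bookkeeping) would go through the authenticated Steamship client and is omitted here:

```python
import shutil
from pathlib import Path

def zip_model_bundle(parent_directory: Path, handle: str = "default") -> Path:
    """Zip the checkpoint folder {parent_directory}/{handle}/ into
    {parent_directory}/{handle}.zip, ready for upload."""
    folder = parent_directory / handle
    archive = shutil.make_archive(str(parent_directory / handle), "zip", root_dir=folder)
    return Path(archive)
```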

workspace: Optional[steamship.data.workspace.Workspace]#

steamship.plugin.outputs.raw_data_plugin_output module#

class steamship.plugin.outputs.raw_data_plugin_output.RawDataPluginOutput(base64string: str = None, string: str = None, _bytes: Union[bytes, io.BytesIO] = None, json: Any = None, mime_type: str = None, *, data: Optional[str] = None)[source]#

Bases: steamship.base.model.CamelModel

Represents mime-typed raw data (or a URL pointing to raw data) that can be returned to the engine.

As a few examples, you can return:

- Raw text: RawDataPluginOutput(string=raw_text, mime_type=MimeTypes.TXT)
- Markdown text: RawDataPluginOutput(string=markdown_text, mime_type=MimeTypes.MKD)
- A PNG image: RawDataPluginOutput(_bytes=png_bytes, mime_type=MimeTypes.PNG)
- A JSON-serializable dataclass: RawDataPluginOutput(json=dataclass, mime_type=MimeTypes.JSON)
- Steamship Blocks: RawDataPluginOutput(json=file, mime_type=MimeTypes.STEAMSHIP_BLOCK_JSON)
- Data uploaded to a pre-signed URL: RawDataPluginOutput(url=presigned_url, mime_type=MimeTypes.TXT)

The data field of this object will ALWAYS be Base64 encoded by the constructor. This ensures that the object is always trivially JSON-serializable over the wire, no matter what it contains.

The mimeType field of this object should be filled in whenever it is known. The Steamship Engine uses it to proactively select defaults for handling the returned data.
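The base64 normalization described above can be sketched with the standard library. This is a hypothetical helper approximating what the constructor does with its `string`, `_bytes`, `json`, and `base64string` arguments, not the SDK's implementation:

```python
import base64
import io
import json as jsonlib

def to_base64_data(base64string=None, string=None, _bytes=None, json=None):
    """Normalize any accepted input into the base64-encoded `data` field."""
    if base64string is not None:
        return base64string  # already encoded
    if string is not None:
        raw = string.encode("utf-8")
    elif _bytes is not None:
        raw = _bytes.getvalue() if isinstance(_bytes, io.BytesIO) else _bytes
    elif json is not None:
        raw = jsonlib.dumps(json).encode("utf-8")
    else:
        raise ValueError("one of base64string, string, _bytes, or json is required")
    return base64.b64encode(raw).decode("utf-8")
```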

data: Optional[str]#
mime_type: Optional[str]#
classmethod parse_obj(obj: Any) pydantic.main.BaseModel[source]#

steamship.plugin.outputs.train_plugin_output module#

class steamship.plugin.outputs.train_plugin_output.TrainPluginOutput(*, pluginInstanceId: str = None, archivePath: str = None, inferenceParams: dict = None, trainingProgress: dict = None, trainingResults: dict = None)[source]#

Bases: steamship.base.model.CamelModel

This is the object produced by a completed trainable operation, stored as the output field of a train task.

archive_path: str#
inference_params: dict#
plugin_instance_id: str#
training_progress: dict#
training_results: dict#
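Note that the constructor above takes camelCase aliases (pluginInstanceId, archivePath, ...) while the attributes are snake_case; that is the CamelModel base class at work. A stdlib sketch of the aliasing rule it applies when serializing:

```python
def to_camel(snake: str) -> str:
    """snake_case -> camelCase: the aliasing rule CamelModel applies."""
    head, *rest = snake.split("_")
    return head + "".join(part.title() for part in rest)

def serialize(fields: dict) -> dict:
    """Emit a wire-ready dict with camelCase keys."""
    return {to_camel(key): value for key, value in fields.items()}

out = serialize({"plugin_instance_id": "pi-123", "archive_path": "pi-123/default.zip"})
# out == {"pluginInstanceId": "pi-123", "archivePath": "pi-123/default.zip"}
```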

steamship.plugin.outputs.training_parameter_plugin_output module#

class steamship.plugin.outputs.training_parameter_plugin_output.TrainingParameterPluginOutput(*, machineType: Optional[str] = None, trainingEpochs: int = None, testingHoldoutPercent: float = None, testSplitSeed: int = None, trainingParams: Dict[str, Any] = None, inferenceParams: Dict[str, Any] = None, exportRequest: steamship.plugin.inputs.export_plugin_input.ExportPluginInput = None)[source]#

Bases: steamship.base.model.CamelModel

export_request: steamship.plugin.inputs.export_plugin_input.ExportPluginInput#
static from_input(input: steamship.plugin.inputs.training_parameter_plugin_input.TrainingParameterPluginInput) steamship.plugin.outputs.training_parameter_plugin_output.TrainingParameterPluginOutput[source]#
inference_params: Dict[str, Any]#
machine_type: Optional[str]#
classmethod parse_obj(obj: Any) pydantic.main.BaseModel[source]#
test_split_seed: int#
testing_holdout_percent: float#
training_epochs: int#
training_params: Dict[str, Any]#
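The testing_holdout_percent and test_split_seed fields suggest how a trainer might carve a reproducible test set out of the training data. A sketch under that assumption — this is not the engine's implementation, just a deterministic holdout split driven by those two parameters:

```python
import random

def split_holdout(items, testing_holdout_percent, test_split_seed):
    """Deterministically split items into (train, test): the seed fixes
    the shuffle, the percent fixes the holdout size."""
    rng = random.Random(test_split_seed)
    shuffled = list(items)
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * testing_holdout_percent)
    return shuffled[n_test:], shuffled[:n_test]

train, test = split_holdout(range(10), 0.2, test_split_seed=7)
```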

Module contents#