Run Multiple Models all together. #365

Closed
1 task done
amankumarchagti opened this issue Apr 23, 2024 · 10 comments
Labels
question Further information is requested

Comments

@amankumarchagti

Search before asking

  • I have searched the Inference issues and found no similar feature requests.

Question

Hi, how can I run multiple models together in the following code? The models are public models on Roboflow, not created by me.

Additional

No response

@amankumarchagti added the question (Further information is requested) label on Apr 23, 2024
@grzegorz-roboflow
Contributor

Hi @amankumarchagti, when you mention the following code, do you mean a code snippet you were intending to include in the question?

@amankumarchagti
Author

Hi @grzegorz-roboflow, apologies for that. The following is the code:

# import the InferencePipeline interface
from inference import InferencePipeline
# import a built-in sink called render_boxes (sinks are the logic that happens after inference)
from inference.core.interfaces.stream.sinks import render_boxes

# create an inference pipeline object
pipeline = InferencePipeline.init(
    model_id="cow-lie-stand-walk/2", # set the model id to the public Roboflow model you want to run
    video_reference="rtsp://192.168.1.100:5543/live/channel0", # set the video reference (source of video); it can be a link/path to a video file, an RTSP stream URL, or an integer representing a device id (usually 0 for built-in webcams)
    on_prediction=render_boxes, # tell the pipeline object what to do with each set of inference results by passing a function
    api_key="<API-KEY>", # provide your Roboflow API key for loading models from the Roboflow API
)
# start the pipeline
pipeline.start()
# wait for the pipeline to finish
pipeline.join()

@grzegorz-roboflow
Contributor

Hi @amankumarchagti, we are finalizing a release that will include a major refactor of workflows; this functionality will enable you to run multiple models in a single pipeline.

If you want to play with it, you can check #343.

With this PR, the following can be done:

import os

from inference.enterprise.workflows.execution_engine.core import ExecutionEngine
from inference.core.managers.base import ModelManager
from inference.core.registries.roboflow import (
    RoboflowModelRegistry,
)
from inference.models.utils import ROBOFLOW_MODEL_TYPES


model_registry = RoboflowModelRegistry(ROBOFLOW_MODEL_TYPES)
model_manager = ModelManager(model_registry=model_registry)


WORKFLOW = {
    "version": "1.0",
    "inputs": [
        {"type": "InferenceImage", "name": "image"},
    ],
    "steps": [
        {
            "type": "ObjectDetectionModel",
            "name": "m1",
            "image": "$inputs.image",
            "model_id": "chess-pieces-and-chess-board-instance-segmentation/1",
        },
        {
            "type": "ObjectDetectionModel",
            "name": "m2",
            "image": "$inputs.image",
            "model_id": "chess-pieces-and-chess-board-instance-segmentation/1",
        },
    ],
    "outputs": [
        {"type": "JsonField", "name": "m1preds", "selector": "$steps.m1.predictions"},
        {"type": "JsonField", "name": "m2preds", "selector": "$steps.m2.predictions"},
    ],
}

execution_engine = ExecutionEngine.init(
    workflow_definition=WORKFLOW,
    init_parameters={
        "workflows_core.model_manager": model_manager,
        "api_key": os.getenv("ROBOFLOW_API_KEY")
    },
)

result = execution_engine.run(
    runtime_parameters={
        "image": {"type": "file", "value": "/path/to/image.jpg"},
        "confidence": 0.8,
    }
)

print(result.keys())
# dict_keys(['m1preds', 'm2preds'])

Hope this answers your question.

@amankumarchagti
Author

amankumarchagti commented May 9, 2024

I think the release is done now, right?

@grzegorz-roboflow
Contributor

grzegorz-roboflow commented May 9, 2024

Absolutely, the code from the above example will now work on the main branch. Bear in mind that I was testing with a model I trained on my personal Roboflow account, so you will probably need to use your own models.

@grzegorz-roboflow
Contributor

@amankumarchagti, I hope you managed to achieve your use case with workflows. I will close this issue; please feel free to create a new issue if you have further questions about workflows.

@john09282922

Hi, I also have an important question.
Instead of a Roboflow model, is it possible to use my own model, e.g. one I pretrained with YOLOv8, v9, or v10?

Thanks,
Jungmin

@grzegorz-roboflow
Contributor

Hi @john09282922, currently, in order to use your own model, you need to create a dataset in your Roboflow app workspace; once the dataset is created, you are presented with the Custom Train and Upload option, which allows you to upload your weights.
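
For reference, uploading existing YOLOv8 weights with the roboflow Python package looks roughly like the sketch below; the workspace/project ids, version number, and weights path are placeholders, and the supported model_type values should be checked against the current Roboflow docs.

import roboflow

# authenticate against your Roboflow workspace (the API key below is a placeholder)
rf = roboflow.Roboflow(api_key="<API-KEY>")

# point at the dataset project and version you created in the Roboflow app
# ("my-workspace", "my-project" and version 1 are placeholder ids)
project = rf.workspace("my-workspace").project("my-project")
version = project.version(1)

# upload the locally trained YOLOv8 weights to that dataset version so they
# can be served through inference; model_path points at the training run directory
version.deploy(model_type="yolov8", model_path="runs/detect/train/")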

@john09282922


Thanks for the information. Instead of the method you mentioned, is it possible to use my models directly? I have several models, each trained on large datasets.

@yeldarby
Contributor

Yes, you can implement your own registry to load your model from elsewhere.

Note that it will need to conform to the same spec (or you will need to implement your own model class as well).
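
As a rough illustration only (module paths and the registry interface may differ between inference versions, and MyCustomModel below is a hypothetical stand-in for a class implementing the same spec as the built-in models), a custom registry could look like this:

from inference.core.registries.base import ModelRegistry
from inference.core.registries.roboflow import RoboflowModelRegistry
from inference.models.utils import ROBOFLOW_MODEL_TYPES


class MyCustomModel:
    # Hypothetical placeholder: a real implementation would mirror the interface
    # of the built-in model classes and load weights from your own storage.
    def __init__(self, model_id, api_key=None):
        raise NotImplementedError("load your own weights here")


class MyModelRegistry(ModelRegistry):
    # Serves a custom model class for ids with a chosen prefix and defers to the
    # standard Roboflow registry for everything else.
    def __init__(self):
        self._roboflow_registry = RoboflowModelRegistry(ROBOFLOW_MODEL_TYPES)

    def get_model(self, model_id, api_key):
        if model_id.startswith("my-models/"):
            return MyCustomModel
        return self._roboflow_registry.get_model(model_id, api_key)


# it can then be plugged in where RoboflowModelRegistry was used above, e.g.
# model_manager = ModelManager(model_registry=MyModelRegistry())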
