
Error when trying to automate detection and transfer #159

Open
NikKaem opened this issue Jun 23, 2023 · 4 comments

Comments

@NikKaem

NikKaem commented Jun 23, 2023

I'm currently trying to build a script that automates the whole end-to-end process with BlendArMocap.

Minimal code looks like this:

import bpy

video_path = "xxx"
smoothing_coefficient = 10

# build a metarig and generate the Rigify rig to transfer onto
bpy.ops.object.armature_human_metarig_add()
bpy.ops.pose.rigify_generate()

# configure BlendArMocap's detection settings
bpy.data.scenes["Scene"].cgtinker_mediapipe.mov_data_path = video_path
bpy.data.scenes["Scene"].cgtinker_mediapipe.key_frame_step = smoothing_coefficient

# run detection
bpy.ops.wm.cgt_feature_detection_operator()

# point the transfer at the generated drivers and rig, then apply
bpy.data.scenes["Scene"].cgtinker_transfer.selected_driver_collection = bpy.data.collections["cgt_DRIVERS"]
bpy.data.scenes["Scene"].cgtinker_transfer.selected_rig = bpy.data.objects["rig"]

bpy.ops.button.cgt_object_apply_properties()

This works when I execute the terminal command blender -P Script.py. However, as soon as I try to keep Blender in the background with blender -b -P Script.py, the detection part fails with the error RuntimeError: expected class WM_OT_cgt_feature_detection_operator, function cancel to return None, not set. I initially thought it was a problem with opening the detection window, but the error persists even when I disable drawing of the window. My ultimate goal is to run all of this in a Docker container, which would be difficult if a frontend has to be shown for it to work. Is there any workaround for that?

@cgtinker
Owner

cgtinker commented Jun 23, 2023

I guess the issue is that bpy.ops.wm.cgt_feature_detection_operator is bound to Blender's window manager (mainly to show realtime updates), and running Blender headless therefore results in an error.

BlendArMocap/src/cgt_mediapipe/cgt_mp_detection_operator.py
If you check the operator, you'll see it uses the window manager. You can use that operator as a baseline to create your own: remove the modal and window-manager logic and just use a while loop in the execute method.

If you have an issue with the OpenCV image display (I don't know whether that's a problem in Docker, as there is no window manager), you'll find a get stream method in the operator. BlendArMocap/src/cgt_mediapipe/cgt_mp_core/cv_stream.py
-> You can probably just overwrite the stream's 'draw' method so it passes instead of drawing, and you should be good in this case.

So either overwrite things in the add-on's operator directly or create your own operator :)
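As a minimal sketch of that draw override (the class here is a simplified stand-in, not the actual class from cv_stream.py, whose real draw method wraps an OpenCV window), the idea is simply to subclass the stream and turn draw into a no-op:

```python
class Stream:
    """Simplified stand-in for the stream class in cgt_mp_core/cv_stream.py."""

    def draw(self, frame=None):
        # the real implementation would open an OpenCV preview window here,
        # which fails in a headless Docker container with no display server
        raise RuntimeError("no display available")


class HeadlessStream(Stream):
    def draw(self, frame=None):
        # just pass instead of drawing, as suggested above
        pass
```

With something like this, the custom operator's get_stream method could return a HeadlessStream instance instead of the default one.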

Quick outline:

class DOCKER_CGT_MP_modal_detection_operator(bpy.types.Operator):
    # register the new operator with a custom id
    ...

    def get_chain(self, stream) -> cgt_nodes.NodeChain:
        ...
        return chain

    def get_stream(self):
        # overwrite the stream if necessary
        return stream

    def execute(self, context):
        # here are some changes
        self.user = context.scene.cgtinker_mediapipe  # noqa

        # init stream and chain
        stream = self.get_stream()
        self.node_chain = self.get_chain(stream)
        if self.node_chain is None:
            self.user.modal_active = False
            return {'FINISHED'}

        # memo skipped frames
        self.memo = []
        self.frame = 0
        self.key_step = self.user.key_frame_step
        while self.overwritten_modal(context) == {'PASS_THROUGH'}:
            pass
        return {'FINISHED'}

    @staticmethod
    def simple_smoothing(memo, cur):
        ...
        return memo

    def overwritten_modal(self, context):
        """ Run detection step by step; the loop ends when the movie input runs out of frames. """
        if self.user.detection_input_type == 'movie':
            # get data
            data, _frame = self.node_chain.nodes[0].update([], self.frame)
            if data is None:
                return self.cancel(context)
            self.simple_smoothing(self.memo, data)
            if self.frame % self.key_step == 0:
                for node in self.node_chain.nodes[1:]:
                    node.update(self.memo, self.frame)
                self.memo.clear()

            self.frame += 1
        return {'PASS_THROUGH'}

    def cancel(self, context):
        """ Upon finishing detection, clear the handlers. """
        del self.node_chain
        return {'FINISHED'}
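For reference, the simple_smoothing placeholder in the outline could be filled with a plain running average. This sketch assumes the per-frame data is a list of (index, [x, y, z]) landmark tuples, which may not match BlendArMocap's actual node output exactly:

```python
def simple_smoothing(memo, cur):
    """Fold the current frame into the buffer by entry-wise averaging.

    Assumed (hypothetical) data layout: a list of (index, [x, y, z]) tuples.
    """
    if not memo:
        # first frame since the last key step: take it as-is
        memo.extend(cur)
        return memo
    for i, (idx, pos) in enumerate(cur):
        _, prev_pos = memo[i]
        # average the previously buffered position with the new one
        memo[i] = (idx, [(a + b) / 2.0 for a, b in zip(prev_pos, pos)])
    return memo
```

The buffer is then flushed to the downstream nodes and cleared every key_step frames, as shown in the outline.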

@NikKaem
Author

NikKaem commented Jul 7, 2023

I just found the time to adapt the operator and it works like a charm. I haven't built a Docker container out of it yet, but I can fully run the end-to-end process from the terminal now. Thank you so much! Is that something you would be interested in having as well?

@cgtinker
Owner

cgtinker commented Jul 7, 2023

You are welcome, glad it works!
Feel free to create a separate folder like src/docker with a short readme so others who might need to run detection in Docker can use your implementation as well :)

@Niko-shvets

@NikKaem It would be great if you could share your solution. Thank you!
