
fix: CrewAI-based flows with no extra openai #4683

Merged
merged 10 commits into from
Nov 18, 2024

Conversation

erichare
Collaborator

This pull request creates LLM() and Tool() objects that are CrewAI-compatible, built from the components in the CrewAI-based flow (including the OpenAI API key, etc.).

@ogabrielluiz this fixes the CrewAI issues in my experience. The root problem is that we were (and are) attempting to use tools and agent LLMs that are not in the format CrewAI expects. This PR attempts to build the CrewAI versions from the specifications of the input tool and LLM. It's not perfect, as you'll see, but I think it might be a better short-term solution than setting the env variable like you pointed out. Let me know what you think. CC @NadirJ: note that I did upgrade CrewAI to the latest release with this, but it is also compatible with the prior release.
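The approach the PR describes, rebuilding CrewAI-compatible objects from the fields of the flow's existing components, can be sketched roughly like this (a minimal illustration using hypothetical stand-in classes, not the actual langflow or CrewAI APIs):

```python
from dataclasses import dataclass


# Hypothetical stand-ins: a LangChain-style component holding provider-specific
# fields, and the CrewAI-shaped LLM object the PR builds from it.
@dataclass
class LangChainModel:
    model_name: str
    openai_api_key: str
    temperature: float = 0.0


@dataclass
class CrewAILLM:
    model: str
    api_key: str
    temperature: float


def to_crewai_llm(lc_model: LangChainModel) -> CrewAILLM:
    """Rebuild a CrewAI-compatible LLM from the LangChain-style model's fields."""
    return CrewAILLM(
        model=lc_model.model_name,
        api_key=lc_model.openai_api_key,
        temperature=lc_model.temperature,
    )


llm = to_crewai_llm(LangChainModel("gpt-4o-mini", "sk-test"))
print(llm.model)  # gpt-4o-mini
```

The difficulty the rest of the thread discusses is that the key-holding field (`openai_api_key` here) is provider-specific, so this mapping cannot be hard-coded for arbitrary models.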

@dosubot dosubot bot added size:M This PR changes 30-99 lines, ignoring generated files. bug Something isn't working labels Nov 18, 2024
@erichare erichare requested a review from NadirJ November 18, 2024 17:21
@github-actions github-actions bot removed the bug Something isn't working label Nov 18, 2024

codspeed-hq bot commented Nov 18, 2024

CodSpeed Performance Report

Merging #4683 will degrade performance by 26.61%

Comparing fix-crewai-update (9c901c6) with main (3188517)

Summary

⚡ 2 improvements
❌ 1 regression
✅ 12 untouched benchmarks

⚠️ Please fix the performance issues or acknowledge them on CodSpeed.

Benchmarks breakdown

| Benchmark | main | fix-crewai-update | Change |
| --- | --- | --- | --- |
| test_successful_run_with_input_type_text | 217.6 ms | 296.5 ms | -26.61% |
| test_successful_run_with_output_type_any | 271.2 ms | 168 ms | +61.44% |
| test_successful_run_with_output_type_debug | 295.5 ms | 216.9 ms | +36.21% |

Contributor

@ogabrielluiz ogabrielluiz left a comment

Misclicked on approve

@dosubot dosubot bot added the lgtm This PR has been approved by a maintainer label Nov 18, 2024
@ogabrielluiz ogabrielluiz self-requested a review November 18, 2024 17:45
Contributor

@ogabrielluiz ogabrielluiz left a comment

I like this.

What do you say we put this logic in the BaseCrewComponent?

The LLM can be passed to the agent or to the Crew. Also, do you know if they support all the LLMs we support?

@dosubot dosubot bot removed the lgtm This PR has been approved by a maintainer label Nov 18, 2024
@erichare
Collaborator Author

> I like this.
>
> What do you say we put this logic in the BaseCrewComponent?
>
> The LLM can be passed to the agent or to the Crew. Also, do you know if they support all the LLMs we support?

This is the main problem right now, I think. I didn't see an easy way to find the "api_key" attribute on an arbitrary embeddings model (the LangChain version). For example, in OpenAI it's openai_api_key, but for other providers it's surely different. I was hoping there was a generic api_key attribute or some lookup to determine it. If you have any ideas there, that would be great. I also pass arbitrary kwargs to the LLM() function, but that has a specific api_key parameter that needs to be filled in, so I'm sure there are models that won't work as-is.

As for putting it in BaseCrewComponent, I think that's a great idea! I can update that.

@erichare
Collaborator Author

@ogabrielluiz not sure if this would be a suitable solution in general, but we could do something like:

    def _find_api_key(self, model):
        """Attempts to find the API key attribute for a LangChain LLM model instance.

        Args:
            model: LangChain LLM model instance.

        Returns:
            The API key if found, otherwise None.
        """
        # Define the possible API key attribute names
        key_patterns = ["api_key", "key", "token"]

        # Iterate over the model attributes
        for attr in dir(model):

            # Check if the attribute name contains any of the key patterns
            if any(pattern in attr.lower() for pattern in key_patterns):
                value = getattr(model, attr, None)

                # Check if the value is a non-empty string
                if isinstance(value, str) and value:
                    return value

        return None
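For what it's worth, the heuristic behaves as hoped on an OpenAI-shaped model but will also latch onto any non-empty key-like string attribute. Here is a standalone rendition (with made-up stand-in classes) illustrating both the intended match and the kind of false positive discussed later in the thread:

```python
def find_api_key(model):
    """Standalone version of the _find_api_key heuristic above, for illustration."""
    key_patterns = ["api_key", "key", "token"]
    for attr in dir(model):
        # Check if the attribute name contains any of the key patterns
        if any(pattern in attr.lower() for pattern in key_patterns):
            value = getattr(model, attr, None)
            # Only accept non-empty strings
            if isinstance(value, str) and value:
                return value
    return None


class FakeOpenAIModel:
    # Matches the "api_key" pattern, as intended.
    openai_api_key = "sk-test-123"


class FakeOtherModel:
    # A string attribute that merely *contains* "token" also matches,
    # even though it is not an API key at all.
    token_type = "Bearer"


print(find_api_key(FakeOpenAIModel()))  # sk-test-123
print(find_api_key(FakeOtherModel()))   # Bearer (false positive)
```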

@ogabrielluiz
Contributor

> @ogabrielluiz not sure if this would be a suitable solution in general, but we could do something like:
>
>     def _find_api_key(self, model):
>         """Attempts to find the API key attribute for a LangChain LLM model instance.
>
>         Args:
>             model: LangChain LLM model instance.
>
>         Returns:
>             The API key if found, otherwise None.
>         """
>         # Define the possible API key attribute names
>         key_patterns = ["api_key", "key", "token"]
>
>         # Iterate over the model attributes
>         for attr in dir(model):
>
>             # Check if the attribute name contains any of the key patterns
>             if any(pattern in attr.lower() for pattern in key_patterns):
>                 value = getattr(model, attr, None)
>
>                 # Check if the value is a non-empty string
>                 if isinstance(value, str) and value:
>                     return value
>
>         return None

I think our best bet for now would be to downgrade crewai to a version where it worked. What do you think?

@erichare
Collaborator Author

> I think our best bet for now would be to downgrade crewai to a version where it worked. What do you think?

I could be wrong about this, but I believe one of the reasons for upgrading CrewAI was the move to langchain~=0.3.x. So I agree it's the best solution, but if that's correct then it may not be an easy option. I'll double-check that, though.

@dosubot dosubot bot added size:L This PR changes 100-499 lines, ignoring generated files. and removed size:M This PR changes 30-99 lines, ignoring generated files. labels Nov 18, 2024
@ogabrielluiz
Contributor

> I think our best bet for now would be to downgrade crewai to a version where it worked. What do you think?
>
> I could be wrong about this, but I believe one of the reasons for upgrading CrewAI was the move to langchain~=0.3.x. So I agree it's the best solution, but if that's correct then it may not be an easy option. I'll double-check that, though.

I see... Well, I guess for now we focus on the short-term solution. Maybe if the user passes a model that we haven't added support for yet, we raise an error.
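That fallback could look something like the following sketch (self-contained for illustration: `find_api_key` restates the heuristic posted earlier in the thread, and the function name and error message are made up):

```python
def find_api_key(model):
    # Same heuristic as proposed earlier in the thread: scan attribute
    # names for key-like patterns and return the first non-empty string.
    for attr in dir(model):
        if any(p in attr.lower() for p in ("api_key", "key", "token")):
            value = getattr(model, attr, None)
            if isinstance(value, str) and value:
                return value
    return None


def require_api_key(model):
    """Fail fast with a clear error when the model's key cannot be located."""
    api_key = find_api_key(model)
    if api_key is None:
        raise ValueError(
            f"{type(model).__name__} is not yet supported: "
            "no API key attribute could be found."
        )
    return api_key


class UnsupportedModel:
    pass  # no key-like attribute at all


try:
    require_api_key(UnsupportedModel())
except ValueError as err:
    print(err)  # prints the unsupported-model error
```

Raising at conversion time surfaces the unsupported provider immediately, instead of letting CrewAI fail later with a less obvious error.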

@erichare
Collaborator Author

> I think our best bet for now would be to downgrade crewai to a version where it worked. What do you think?
>
> I could be wrong about this, but I believe one of the reasons for upgrading CrewAI was the move to langchain~=0.3.x. So I agree it's the best solution, but if that's correct then it may not be an easy option. I'll double-check that, though.
>
> I see... Well, I guess for now we focus on the short-term solution. Maybe if the user passes a model that we haven't added support for yet, we raise an error.

I just updated the PR to factor the code a little better. I search for a "key" or "token" partial match in the attribute list. This works with the ones I've tested, and I believe it works with all of the models we support that require a key, BUT obviously it's not perfect by any stretch and may cause a false positive. With that said, I think it's better than the non-working solution from before (which really only works with OpenAI anyway).

If only there were some sort of from_langchain() method for an agent LLM like they have for tools (which I'm using in this PR). But alas, I don't see one: https://github.com/crewAIInc/crewAI/blob/main/src/crewai/llm.py

@ogabrielluiz
Contributor

They use litellm, which could be OK to integrate later. @phact has experience with it.

@dosubot dosubot bot added the lgtm This PR has been approved by a maintainer label Nov 18, 2024
@erichare erichare enabled auto-merge (squash) November 18, 2024 22:33
@erichare erichare linked an issue Nov 18, 2024 that may be closed by this pull request
@erichare erichare merged commit 2fa2580 into main Nov 18, 2024
29 checks passed
@erichare erichare deleted the fix-crewai-update branch November 18, 2024 23:04
mieslep pushed a commit to mieslep/langflow that referenced this pull request Nov 19, 2024
* fix: CrewAI-based flows with no extra openai

* [autofix.ci] apply automated fixes

* Clean up the location of the crewai model processing

* [autofix.ci] apply automated fixes

* Properly subclass the tasks and agents method

---------

Co-authored-by: autofix-ci[bot] <114827586+autofix-ci[bot]@users.noreply.github.com>
diogocabral pushed a commit to headlinevc/langflow that referenced this pull request Nov 26, 2024
Labels
lgtm This PR has been approved by a maintainer size:L This PR changes 100-499 lines, ignoring generated files.
Development

Successfully merging this pull request may close these issues.

issue with langflow + crewai
2 participants