Finish adding type annotations #3

Open
wants to merge 3 commits into base: base-sha/3e61055158e16bae52377ccab5169efc02d13607

Conversation

@sourcery-ai-experiments-bot (Collaborator) commented Jul 8, 2024

Summary by Sourcery

This pull request adds type annotations to various methods and functions across multiple files in the project. These changes enhance type safety and improve code clarity without altering the core functionality.

  • Enhancements:
    • Added type annotations to various methods and functions across multiple files to improve type safety and code clarity.

@sourcery-ai-experiments-bot (Collaborator, Author) commented:

This is a benchmark review for experiment review_of_reviews_20240708.
Run ID: review_of_reviews_20240708/benchmark_2024-07-08T00-18-37_v1-20-0-4-gb56c705c9.

This pull request was cloned from https://github.com/hbmartin/graphviz2drawio/pull/85. (Note: the URL is not a link to avoid triggering a notification on the original pull request.)

Experiment configuration
review_config:
  # User configuration for the review
  # - benchmark - use the user config from the benchmark reviews
  # - <value> - use the value directly
  user_review_config:
    enable_ai_review: true
    enable_rule_comments: false

    enable_complexity_comments: benchmark
    enable_security_comments: benchmark
    enable_tests_comments: benchmark
    enable_comment_suggestions: benchmark
    enable_functionality_review: benchmark

    enable_pull_request_summary: benchmark
    enable_review_guide: benchmark

    enable_approvals: true

  ai_review_config:
    # The model responses to use for the experiment
    # - benchmark - use the model responses from the benchmark reviews
    # - llm - call the language model to generate responses
    model_responses:
      comments_model: benchmark
      comment_area_model: benchmark
      comment_validation_model: benchmark
      comment_suggestion_model: benchmark
      complexity_model: benchmark
      docstrings_model: benchmark
      functionality_model: benchmark
      security_model: benchmark
      tests_model: benchmark
      pull_request_summary_model: benchmark
      review_guide_model: benchmark

# The pull request dataset to run the experiment on
pull_request_dataset:
- https://github.com/mraniki/iamlistening/pull/334
- https://github.com/mraniki/cefi/pull/475
- https://github.com/mraniki/MyLLM/pull/581
- https://github.com/mraniki/dxsp/pull/689
- https://github.com/jschalk/jaar/pull/239
- https://github.com/jschalk/jaar/pull/241
- https://github.com/jschalk/jaar/pull/242
- https://github.com/iptux-src/iptux/pull/620
- https://github.com/iptux-src/iptux/pull/622
- https://github.com/hacksider/Deep-Live-Cam/pull/46
- https://github.com/mnbf9rca/super_simple_tfl_status/pull/114
- https://github.com/mnbf9rca/super_simple_tfl_status/pull/115
- https://github.com/RockProfile/Django-deployment/pull/1
- https://github.com/hbmartin/graphviz2drawio/pull/83
- https://github.com/fairdataihub/codefair-app/pull/28
- https://github.com/totaldebug/atomic-calendar-revive/pull/1518
- https://github.com/iammiracle01/portfolio/pull/1
- https://github.com/iammiracle01/portfolio/pull/2
- https://github.com/iammiracle01/portfolio/pull/3
- https://github.com/supermario-ai/gpt-crawler/pull/1
- https://github.com/hbmartin/graphviz2drawio/pull/84
- https://github.com/hbmartin/graphviz2drawio/pull/85
- https://github.com/dreamerminsk/tasked/pull/85
- https://github.com/dreamerminsk/tasked/pull/86
- https://github.com/dreamerminsk/tasked/pull/84
- https://github.com/haalasz/fm-tools/pull/9
- https://github.com/haalasz/fm-tools/pull/10
- https://github.com/iptux-src/iptux/pull/619
- https://github.com/code-Harsh247/medsymptom/pull/1
- https://github.com/code-Harsh247/medsymptom/pull/2
- https://github.com/cpp-lln-lab/bidspm/pull/1263
- https://github.com/cpp-lln-lab/bidspm/pull/1264
- https://github.com/cpp-lln-lab/bidspm/pull/1265
- https://github.com/luiscarlop/Judge_AI/pull/22
- https://github.com/NoNormalCreeper/nonebot_plugin_wolfram/pull/6
- https://github.com/osama1998H/kalima/pull/39
- https://github.com/osama1998H/kalima/pull/40
- https://github.com/osama1998H/kalima/pull/41
- https://github.com/jackdewinter/pymarkdown/pull/1131
- https://github.com/Eliver-Salazar/PED/pull/12
- https://github.com/NextAlone/Nagram/pull/40
- https://github.com/strawberry-graphql/strawberry-django/pull/575
- https://github.com/strawberry-graphql/strawberry/pull/3558
- https://github.com/strawberry-graphql/strawberry/pull/3559
- https://github.com/Ruin2121/yahooquery/pull/9
- https://github.com/gdsfactory/gdsfactory/pull/2951
- https://github.com/gdsfactory/gdsfactory/pull/2954
- https://github.com/gdsfactory/gdsfactory/pull/2956
- https://github.com/gdsfactory/gdsfactory/pull/2957
- https://github.com/gdsfactory/cspdk/pull/51
review_comment_labels:
- label: correct
  question: Is this comment correct?
- label: helpful
  question: Is this comment helpful?
- label: comment-type
  question: Is the comment type correct?
- label: comment-area
  question: Is the comment area correct?
- label: llm-test
  question: |
    What type of LLM test could this comment become?
    - 👍 - this comment is really good/important and we should always make it
    - 👎 - this comment is really bad and we should never make it
    - no reaction - don't turn this comment into an LLM test

# Benchmark reviews generated by running
#   python -m scripts.experiment benchmark <experiment_name>
benchmark_reviews: []

@SourceryAI commented Jul 8, 2024

Reviewer's Guide by Sourcery

This pull request adds type annotations to various methods and functions across multiple files in the project. These changes enhance type safety and improve code clarity.

File-Level Changes

Files:
  • graphviz2drawio/models/Errors.py
  • graphviz2drawio/mx/Text.py
  • graphviz2drawio/mx/Edge.py
  • graphviz2drawio/models/Rect.py
  • graphviz2drawio/models/SVG.py
  • graphviz2drawio/mx/GraphObj.py
  • graphviz2drawio/mx/NodeFactory.py
  • graphviz2drawio/models/Arguments.py
  • graphviz2drawio/mx/Curve.py
  • graphviz2drawio/mx/CurveFactory.py
  • graphviz2drawio/mx/EdgeFactory.py
  • graphviz2drawio/mx/MxGraph.py
  • graphviz2drawio/mx/Node.py
  • graphviz2drawio/mx/Styles.py

Changes (shared by all files above): Added type annotations to various methods and functions to improve type safety and code clarity.
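
For a concrete sense of the pattern, here is one before/after drawn from the Text.from_svg hunk reviewed below; the other files follow the same shape, with method bodies elided:

    # Before: parameter and return types left implicit
    @staticmethod
    def from_svg(t):
        ...

    # After: the input Element and the Text return type are annotated
    @staticmethod
    def from_svg(t: Element) -> "Text":
        ...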

Tips
  • Trigger a new Sourcery review by commenting @sourcery-ai review on the pull request.
  • Continue your discussion with Sourcery by replying directly to review comments.
  • You can change your review settings at any time by accessing your dashboard:
    • Enable or disable the Sourcery-generated pull request summary or reviewer's guide;
    • Change the review language;
  • You can always contact us if you have any questions or feedback.

@SourceryAI left a comment

Hey @sourcery-ai-experiments-bot - I've reviewed your changes and they look great!

Here's what I looked at during the review
  • 🟡 General issues: 5 issues found
  • 🟢 Security: all looks good
  • 🟢 Testing: all looks good
  • 🟢 Complexity: all looks good
  • 🟢 Documentation: all looks good

LangSmith trace

Help me be more useful! Please click 👍 or 👎 on each comment to tell me if it was helpful.

)


class MissingTextError(GdValueError):

issue: Error message for MissingTextError is incorrect.

The docstring for MissingTextError should be updated to reflect that it is for missing text, not a title.
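
A minimal sketch of the kind of fix being asked for; the GdValueError stub and the docstring wording are assumptions for illustration, since the rest of Errors.py is not shown in this hunk:

    class GdValueError(ValueError):  # stand-in for the project's real base error class in Errors.py
        pass


    class MissingTextError(GdValueError):
        """Raised when the expected text is missing from the SVG element."""  # hypothetical wording; the point is it should describe missing text, not a missing title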

@@ -47,9 +50,12 @@ def to_simple_value(self) -> str:
         return text

     @staticmethod
-    def from_svg(t):
+    def from_svg(t: Element) -> "Text":

nitpick: Consider using single quotes for consistency.

Using single quotes for the return type annotation would be more consistent with the rest of the codebase.
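
For reference, a sketch of the spelling this nitpick asks for; the forward reference is functionally identical with either quote style, and the class body is elided here:

    from xml.etree.ElementTree import Element

    class Text:
        @staticmethod
        def from_svg(t: Element) -> 'Text':  # single-quoted forward reference to the enclosing class
            ...

An alternative that avoids quoting altogether is adding from __future__ import annotations at the top of the module, which defers evaluation of every annotation in the file.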

@@ -62,7 +63,7 @@ def _extract_fill_and_stroke(g: Element) -> tuple[str | None, str | None]:
         return g.attrib.get("fill", None), g.attrib.get("stroke", None)

     @staticmethod
-    def _extract_texts(g: Element):
+    def _extract_texts(g: Element) -> list[Text]:

suggestion: Consider handling edge cases for empty or malformed elements.

It might be beneficial to handle edge cases where the element g is empty or malformed to prevent potential runtime errors.

Suggested change
-    def _extract_texts(g: Element) -> list[Text]:
+    def _extract_texts(g: Element) -> list[Text]:
+        if g is None or not hasattr(g, 'text'):
+            return []

@@ -13,7 +15,7 @@ def __init__(self, coords: CoordsTranslate, *, is_directed: bool) -> None:
         self.curve_factory = CurveFactory(coords)
         self.is_directed = is_directed

-    def from_svg(self, g) -> Edge:
+    def from_svg(self, g: Element) -> Edge:

suggestion: Consider adding validation for the input element.

Adding validation for the input element g could help catch errors early and improve the robustness of the method.

Suggested change
-    def from_svg(self, g: Element) -> Edge:
+    from xml.etree.ElementTree import Element
+    def from_svg(self, g: Element) -> Edge:
+        if not isinstance(g, Element):
+            raise TypeError("Input must be an instance of xml.etree.ElementTree.Element")

@@ -97,7 +98,7 @@ def add_node(self, node: Node) -> None:
         self.add_mx_geo(node_element, node.rect)

     @staticmethod
-    def add_mx_geo(element, rect=None) -> None:
+    def add_mx_geo(element: Element, rect: Rect | None = None) -> None:

suggestion: Consider using Optional for type hinting.

Using Optional[Rect] instead of Rect | None would be more consistent with common type hinting practices.

Suggested change
-    def add_mx_geo(element: Element, rect: Rect | None = None) -> None:
+    from typing import Optional
+    def add_mx_geo(element: Element, rect: Optional[Rect] = None) -> None:
