[refactor] Reduce cognitive complexity from multiple return statements #228
Conversation
Thank you for your contribution!
(
    getattr(self, attribute)
    == getattr(other, attribute)
    for attribute in [
This looks like it might be less efficient because of the nested-loop situation (I'm not sure how Python actually interprets this, but it looks like a nested loop to me). However, in the interest of fixing the code complexity issue, and because this looks more elegant and Pythonic, I'm going to avoid premature optimization and go with it. Thanks 😄
This is a good question! The outer expression is a generator, so it pulls the attributes from the loop only one at a time. This means that if an earlier attribute test fails, the remaining attributes don't get tested at all.
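The lazy evaluation described above can be demonstrated with an instrumented generator. This is a minimal sketch, not the actual models.py code: the `Widget` class, its attribute names, and the `compare` helper are all hypothetical, chosen only to show that `all()` stops consuming the generator at the first mismatch.

```python
class Widget:
    """Hypothetical class with attribute-based equality (illustration only)."""

    COMPARE_ATTRS = ["name", "url", "category"]

    def __init__(self, name, url, category):
        self.name = name
        self.url = url
        self.category = category

    def __eq__(self, other):
        # all() consumes the generator expression lazily, comparing
        # attributes one at a time and stopping at the first mismatch.
        return all(
            getattr(self, attribute) == getattr(other, attribute)
            for attribute in self.COMPARE_ATTRS
        )


# Instrumented generator to record which attributes actually get compared.
checked = []

def compare(a, b, attrs):
    for attribute in attrs:
        checked.append(attribute)
        yield getattr(a, attribute) == getattr(b, attribute)


a = Widget("python", "https://python.org", "language")
b = Widget("ruby", "https://ruby-lang.org", "language")
result = all(compare(a, b, Widget.COMPARE_ATTRS))
# "name" differs, so all() short-circuits and only "name" is ever checked.
print(result, checked)  # → False ['name']
```

So despite looking like a nested loop, the generator never does more comparisons than the chain of early returns it replaced.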
I'm open to coming back and optimizing this if needed in the future :)
I think it's a little early to explore performance testing. Once we do, this might be an area we look at, but it all depends. For now, we need to get the front end coded up, and possibly other clients like the Slackbot hooked in, before we'll have enough users to worry about performance.
Thanks for the offer 😄
Ah, got it, just saw OperationCode/front-end#661. Is there a public API that can be pinged to test out this resource without using the Slackbot?
I'm not sure if you meant to ask "Where is the resources API deployed?", but that's the question I'm going to answer: https://resources.operationcode.org has the docs — follow them to use the API.
Ah, despite my awkward phrasing, you figured out the question that I had meant to ask. Thanks!
Overview
Removes 4 CodeClimate warnings about having too many return statements, and 1 warning about high cyclomatic complexity, while preserving the same behavior. This was motivated by Fix Code Climate issues #82.
Uses a generator so that the test will short-circuit if any of the earlier checks fail.
See https://codeclimate.com/github/OperationCode/resources_api/app/models.py/source#issue-a49e24d316f36fbe89761497d99f0928 for the warnings of interest.
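As a sketch of the pattern this refactor targets (these function bodies and attribute names are illustrative, not the actual models.py code): each early return in the original style counts against CodeClimate's return-count and complexity metrics, while the refactored form has a single return and still short-circuits at the first mismatch.

```python
from types import SimpleNamespace

# Before: one early return per attribute — flagged for many returns
# and high cyclomatic complexity (hypothetical example).
def eq_with_many_returns(self, other):
    if self.name != other.name:
        return False
    if self.url != other.url:
        return False
    if self.category != other.category:
        return False
    return True

# After: a single return over a generator expression. all() still stops
# at the first failing comparison, so behavior is unchanged.
def eq_refactored(self, other):
    return all(
        getattr(self, attribute) == getattr(other, attribute)
        for attribute in ["name", "url", "category"]
    )

a = SimpleNamespace(name="x", url="u", category="c")
b = SimpleNamespace(name="x", url="u", category="c")
c = SimpleNamespace(name="y", url="u", category="c")
print(eq_with_many_returns(a, b), eq_refactored(a, b))  # → True True
print(eq_with_many_returns(a, c), eq_refactored(a, c))  # → False False
```

Both forms agree on every input; only the structure (and the metrics CodeClimate computes from it) changes.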