
[ECO-4952] Start implementing room lifecycle spec #19

Conversation

ruancomelli
Collaborator

This is based on ably/specification#200 at b4a495e. It’s generated some questions, which I’ve asked on that PR.

Note that the spec has been through a few revisions since I started implementing this; I’ve tried to keep everything in sync but some older stuff might still be accidentally there.

I’ve implemented most of the specified behaviour for the ATTACH, DETACH, and RELEASE operations. I have not yet implemented RETRY since it’s quite different to those three and I wanted to get eyes on this first chunk of work; I’ve created ably#51 for implementing it.

Of the operations that I have implemented, I have not implemented the following (this is accurately reflected by the @spec… tags in the tests):

  • Lifecycle behaviour that relates to another ongoing operation (created ably#52)
  • Lifecycle behaviour relating to “transient disconnect timeouts” (created ably#48)
  • Lifecycle behaviour that occurs “asynchronously” outside an operation (created ably#50)
  • CHA-RL1g2, which refers to emitting “discontinuity events”, a concept not yet implemented; this will come in ably#53.

The room lifecycle manager introduced by this commit is currently a standalone object, which is not integrated with the rest of the SDK. I’ve created ably#47 for doing this integration work.

The @spec… tags (an illustrative example follows the list below) are based on the JS rules (https://github.com/ably/ably-js/blob/main/CONTRIBUTING.md#tests-alignment-with-the-ably-features-specification) @ 8c9ce8b, but I camel-cased them and also decided that:

  • if a test doesn't relate to a spec point, it doesn't need any markers
  • there should be a way to know that a spec point is tested across multiple tests
  • there should be a way of marking a spec point as implemented but not tested (so that it can show up differently in reporting; this is important given that we’re also going to use this report as a to-do list of what still needs to be implemented)
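
For illustration, a tag on a test might look like the following sketch (the test names and spec IDs here are invented, and I’m assuming Swift Testing’s `@Test` style; the real tests live in Tests/AblyChatTests):

```swift
import Testing

// @spec CHA-RL1d (hypothetical: this single test covers the whole spec point)
@Test
func attach_whenOperationSucceeds_transitionsRoomToAttached() async throws {
    // ...
}

// @specOneOf(1/2) CHA-RL1e (hypothetical: one of two tests that together cover the point)
@Test
func attach_whenAlreadyAttached_doesNothing() async throws {
    // ...
}

// @specPartial CHA-RL1g (hypothetical: only part of the spec point is tested here)
@Test
func attach_updatesRoomStatus() async throws {
    // ...
}
```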

Part of ably#28.

Summary by CodeRabbit

  • New Features

    • Introduced new error codes for improved error handling in the chat SDK.
    • Added a new section in the contribution guidelines for better test documentation related to SDK features.
    • Implemented a RoomLifecycleManager to manage chat room contributor states effectively.
    • Added a SimpleClock protocol to facilitate asynchronous task management in testing (a rough sketch follows this summary).
    • Introduced a mock implementation of RoomLifecycleContributorChannel for testing purposes.
  • Bug Fixes

    • Enhanced error reporting for attachment and detachment failures within the chat SDK.
  • Tests

    • Added comprehensive unit tests for the RoomLifecycleManager to ensure robust lifecycle management of chat rooms.
    • Introduced mock implementations to facilitate controlled testing scenarios.
    • Improved assertion capabilities in the testing framework for better error handling validation.
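
To make the testing-support pieces above more concrete, here is a minimal sketch of what the SimpleClock protocol and its mock might look like (assumed shapes; the actual definitions in the SDK and test sources may differ):

```swift
import Foundation

// A minimal clock abstraction: production code asks the clock to sleep, and tests
// substitute a mock so delays (e.g. retry pauses) don't actually slow the suite down.
internal protocol SimpleClock: Sendable {
    func sleep(timeInterval: TimeInterval) async throws
}

// A mock that records the requested delays instead of sleeping.
internal actor MockSimpleClock: SimpleClock {
    internal private(set) var sleepCallArguments: [TimeInterval] = []

    internal func sleep(timeInterval: TimeInterval) async throws {
        sleepCallArguments.append(timeInterval)
    }
}
```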

@ruancomelli
Collaborator Author

This is an experiment review for experiment multiple_mermaid_diagram_types
Run ID: multiple_mermaid_diagram_types/run_2024-10-01T15-06-26_v1-23-0-47-g5ce327d76-dirty
The benchmark review for this pull request can be found at #11.
This pull request was cloned from https://github.com/ably-labs/ably-chat-swift/pull/54. (Note: the URL is not a link to avoid triggering a notification on the original pull request.)

Experiment configuration
review_config:
  # User configuration for the review
  # - benchmark - use the user config from the benchmark reviews
  # - <value> - use the value directly
  user_review_config:
    enable_ai_review: true
    enable_rule_comments: false

    enable_complexity_comments: false
    enable_security_comments: false
    enable_tests_comments: false
    enable_comment_suggestions: false
    enable_functionality_review: false

    enable_pull_request_summary: false
    enable_review_guide: true

    enable_approvals: false

  ai_review_config:
    # The model responses to use for the experiment
    # - benchmark - use the model responses from the benchmark reviews
    # - llm - call the language model to generate responses
    model_responses:
      comments_model: benchmark
      comment_area_model: benchmark
      comment_validation_model: benchmark
      comment_suggestion_model: benchmark
      complexity_model: benchmark
      functionality_model: benchmark
      security_model: benchmark
      tests_model: benchmark
      pull_request_summary_model: benchmark
      review_guide_model: llm
      overall_comments_model: benchmark

# The pull request dataset to run the experiment on
pull_request_dataset:
  # Sourcery core examples:
- https://github.com/sourcery-ai/core/pull/4607
- https://github.com/sourcery-ai/core/pull/4631
- https://github.com/sourcery-ai/core/pull/4647
  # CodeRabbit examples:
- https://github.com/2lambda123/-Orange-OpenSource-oorobot/pull/15
- https://github.com/2lambda123/galaxyproject-galaxy/pull/12
- https://github.com/a0v0/avtoolz/pull/79
- https://github.com/adityask98/Hotaru/pull/10
- https://github.com/agdas/vscode/pull/2
- https://github.com/agluszak/hirschgarten/pull/2
- https://github.com/alexsnow348/insightface/pull/46
- https://github.com/alikuxac/utilities/pull/10
- https://github.com/AlphaDev87/timba-api/pull/49
- https://github.com/AngeloTadeucci/Maple2/pull/239
- https://github.com/AngeloTadeucci/Maple2.File/pull/36
- https://github.com/AngeloTadeucci/Maple2/pull/233
  # Examples where CodeRabbit does not generate diagrams
- https://github.com/baptisteArno/typebot.io/pull/1778
- https://github.com/btipling/foundations/pull/33
- https://github.com/btipling/foundations/pull/31
- https://github.com/chintu-777/jaeger/pull/1
- https://github.com/coji/remix-docs-ja/pull/55
- https://github.com/DaveMBush/SmartNgRX/pull/622
- https://github.com/DaveMBush/SmartNgRX/pull/481
- https://github.com/dkittle/party-connections/pull/6
- https://github.com/Drajad-Kusuma-Adi/onstudy-backend/pull/6
- https://github.com/imaami/libcanth/pull/2
  # Requested by Tim
- https://github.com/2lambda123/-Orange-OpenSource-oorobot/pull/15
- https://github.com/2lambda123/galaxyproject-galaxy/pull/12
- https://github.com/a0v0/avtoolz/pull/79
- https://github.com/adityask98/Hotaru/pull/10
- https://github.com/agdas/vscode/pull/2
- https://github.com/agluszak/hirschgarten/pull/2
- https://github.com/4DNucleome/PartSeg/pull/1183
- https://github.com/ably/ably-cocoa/pull/1973
- https://github.com/ably-labs/ably-chat-swift/pull/54
- https://github.com/ably-labs/ably-chat-swift/pull/35
  # A ton of new classes:
- https://github.com/sourcery-ai/core/pull/4618
  # Including database changes to evaluate entity-relationship diagrams
- https://github.com/sourcery-ai/core/pull/4579
  # TODO: find some with Alembic
  # Brilliant existing examples:
- https://github.com/sourcery-ai/core/pull/4680
- https://github.com/sourcery-ai/core/pull/4684

# Questions to ask to label the review comments
review_comment_labels: []
# - label: correct
#   question: Is this comment correct?

# Benchmark reviews generated by running
#   python -m scripts.experiment benchmark <experiment_name>
benchmark_reviews:
- dataset_pull_request: https://github.com/sourcery-ai/core/pull/4607
  review_pull_request: https://github.com/sourcery-ai-experiments/core/pull/374
- dataset_pull_request: https://github.com/sourcery-ai/core/pull/4631
  review_pull_request: https://github.com/sourcery-ai-experiments/core/pull/375
- dataset_pull_request: https://github.com/sourcery-ai/core/pull/4647
  review_pull_request: https://github.com/sourcery-ai-experiments/core/pull/376
- dataset_pull_request: https://github.com/2lambda123/-Orange-OpenSource-oorobot/pull/15
  review_pull_request: https://github.com/sourcery-ai-experiments/-Orange-OpenSource-oorobot/pull/5
- dataset_pull_request: https://github.com/2lambda123/galaxyproject-galaxy/pull/12
  review_pull_request: https://github.com/sourcery-ai-experiments/galaxyproject-galaxy/pull/13
- dataset_pull_request: https://github.com/adityask98/Hotaru/pull/10
  review_pull_request: https://github.com/sourcery-ai-experiments/Hotaru/pull/14
- dataset_pull_request: https://github.com/agdas/vscode/pull/2
  review_pull_request: https://github.com/sourcery-ai-experiments/vscode/pull/14
- dataset_pull_request: https://github.com/agluszak/hirschgarten/pull/2
  review_pull_request: https://github.com/sourcery-ai-experiments/hirschgarten/pull/13
- dataset_pull_request: https://github.com/alikuxac/utilities/pull/10
  review_pull_request: https://github.com/sourcery-ai-experiments/utilities/pull/4
- dataset_pull_request: https://github.com/AlphaDev87/timba-api/pull/49
  review_pull_request: https://github.com/sourcery-ai-experiments/timba-api/pull/12
- dataset_pull_request: https://github.com/AngeloTadeucci/Maple2/pull/239
  review_pull_request: https://github.com/sourcery-ai-experiments/Maple2/pull/23
- dataset_pull_request: https://github.com/AngeloTadeucci/Maple2.File/pull/36
  review_pull_request: https://github.com/sourcery-ai-experiments/Maple2.File/pull/12
- dataset_pull_request: https://github.com/AngeloTadeucci/Maple2/pull/233
  review_pull_request: https://github.com/sourcery-ai-experiments/Maple2/pull/24
- dataset_pull_request: https://github.com/baptisteArno/typebot.io/pull/1778
  review_pull_request: https://github.com/sourcery-ai-experiments/typebot.io/pull/7
- dataset_pull_request: https://github.com/btipling/foundations/pull/33
  review_pull_request: https://github.com/sourcery-ai-experiments/foundations/pull/13
- dataset_pull_request: https://github.com/btipling/foundations/pull/31
  review_pull_request: https://github.com/sourcery-ai-experiments/foundations/pull/14
- dataset_pull_request: https://github.com/chintu-777/jaeger/pull/1
  review_pull_request: https://github.com/sourcery-ai-experiments/jaeger/pull/7
- dataset_pull_request: https://github.com/coji/remix-docs-ja/pull/55
  review_pull_request: https://github.com/sourcery-ai-experiments/remix-docs-ja/pull/7
- dataset_pull_request: https://github.com/DaveMBush/SmartNgRX/pull/622
  review_pull_request: https://github.com/sourcery-ai-experiments/SmartNgRX/pull/13
- dataset_pull_request: https://github.com/DaveMBush/SmartNgRX/pull/481
  review_pull_request: https://github.com/sourcery-ai-experiments/SmartNgRX/pull/14
- dataset_pull_request: https://github.com/dkittle/party-connections/pull/6
  review_pull_request: https://github.com/sourcery-ai-experiments/party-connections/pull/7
- dataset_pull_request: https://github.com/Drajad-Kusuma-Adi/onstudy-backend/pull/6
  review_pull_request: https://github.com/sourcery-ai-experiments/onstudy-backend/pull/7
- dataset_pull_request: https://github.com/imaami/libcanth/pull/2
  review_pull_request: https://github.com/sourcery-ai-experiments/libcanth/pull/7
- dataset_pull_request: https://github.com/2lambda123/-Orange-OpenSource-oorobot/pull/15
  review_pull_request: https://github.com/sourcery-ai-experiments/-Orange-OpenSource-oorobot/pull/6
- dataset_pull_request: https://github.com/2lambda123/galaxyproject-galaxy/pull/12
  review_pull_request: https://github.com/sourcery-ai-experiments/galaxyproject-galaxy/pull/14
- dataset_pull_request: https://github.com/adityask98/Hotaru/pull/10
  review_pull_request: https://github.com/sourcery-ai-experiments/Hotaru/pull/15
- dataset_pull_request: https://github.com/agdas/vscode/pull/2
  review_pull_request: https://github.com/sourcery-ai-experiments/vscode/pull/15
- dataset_pull_request: https://github.com/agluszak/hirschgarten/pull/2
  review_pull_request: https://github.com/sourcery-ai-experiments/hirschgarten/pull/14
- dataset_pull_request: https://github.com/4DNucleome/PartSeg/pull/1183
  review_pull_request: https://github.com/sourcery-ai-experiments/PartSeg/pull/7
- dataset_pull_request: https://github.com/ably/ably-cocoa/pull/1973
  review_pull_request: https://github.com/sourcery-ai-experiments/ably-cocoa/pull/6
- dataset_pull_request: https://github.com/ably-labs/ably-chat-swift/pull/54
  review_pull_request: https://github.com/sourcery-ai-experiments/ably-chat-swift/pull/11
- dataset_pull_request: https://github.com/ably-labs/ably-chat-swift/pull/35
  review_pull_request: https://github.com/sourcery-ai-experiments/ably-chat-swift/pull/12
- dataset_pull_request: https://github.com/sourcery-ai/core/pull/4579
  review_pull_request: https://github.com/sourcery-ai-experiments/core/pull/377

@SourceryAI

SourceryAI commented Oct 1, 2024

Reviewer's Guide by Sourcery

This pull request implements the initial room lifecycle management functionality for the Ably Chat SDK. It introduces a RoomLifecycleManager that handles the ATTACH, DETACH, and RELEASE operations as specified in the Chat SDK features specification. The implementation includes error handling, state transitions, and retry mechanisms for various scenarios.
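
As a rough sketch of the surface described here (everything below is an assumed shape for illustration; the real code is in Sources/AblyChat/RoomLifecycleManager.swift and will differ in detail):

```swift
// Hypothetical room status values; the actual SDK type and cases may differ.
internal enum RoomLifecycle {
    case initialized, attaching, attached, detaching, detached, suspended, failed, releasing, released
}

// Hypothetical outline of the manager: an actor that owns the room status and
// coordinates the contributors' channels when an operation is requested.
internal actor RoomLifecycleManager {
    internal private(set) var current: RoomLifecycle = .initialized

    /// ATTACH operation (CHA-RL1): attach each contributor's channel and move the
    /// room towards ATTACHED, mapping failures to error information.
    internal func performAttachOperation() async throws { /* ... */ }

    /// DETACH operation (CHA-RL2): detach each contributor's channel and move the
    /// room towards DETACHED.
    internal func performDetachOperation() async throws { /* ... */ }

    /// RELEASE operation (CHA-RL3): wind the room down so it can be disposed of.
    internal func performReleaseOperation() async { /* ... */ }
}
```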

No diagrams generated as the changes look simple and do not need a visual representation.

File-Level Changes

Implemented room lifecycle management
  • Created RoomLifecycleManager to handle ATTACH, DETACH, and RELEASE operations
  • Implemented state transitions (e.g., ATTACHING, DETACHING, RELEASING)
  • Added error handling for various failure scenarios
  • Implemented retry mechanisms for failed operations
  Files: Sources/AblyChat/RoomLifecycleManager.swift

Added new error codes and types for room lifecycle operations
  • Introduced new error codes for attachment and detachment failures
  • Created ChatError enum for more detailed error information
  • Added helper methods for error creation and localized descriptions
  Files: Sources/AblyChat/Errors.swift

Created mock objects and helpers for testing
  • Implemented MockRoomLifecycleContributorChannel for simulating channel behavior
  • Created MockSimpleClock for controlling time-based operations in tests
  • Added helper functions and structs for test setup and assertions
  Files: Tests/AblyChatTests/Mocks/MockRoomLifecycleContributorChannel.swift, Tests/AblyChatTests/Mocks/MockSimpleClock.swift, Tests/AblyChatTests/Helpers/Helpers.swift

Implemented comprehensive test suite for RoomLifecycleManager
  • Added tests for ATTACH, DETACH, and RELEASE operations
  • Implemented tests for various error scenarios and state transitions
  • Created tests for retry mechanisms and contributor behavior
  Files: Tests/AblyChatTests/RoomLifecycleManagerTests.swift

Updated contribution guidelines
  • Added instructions for attributing tests to specification points
  • Introduced @spec, @specOneOf, and @specPartial tags for test documentation
  • Added guidelines for marking untested specification points
  Files: CONTRIBUTING.md

SourceryAI left a comment

Hey @ruancomelli - I've reviewed your changes - here's some feedback:

Overall Comments:

  • Break down large pull requests into smaller, more manageable pieces for easier review and testing.
  • Ensure all TODOs and outstanding questions are resolved before merging to avoid potential issues.
  • Prioritize the integration of the RoomLifecycleManager with the rest of the SDK to ensure usability.

Here's what I looked at during the review:
  • 🟡 General issues: 2 issues found
  • 🟡 Documentation: 2 issues found


Comment on lines +19 to +28
```swift
case messagesAttachmentFailed = 102_001
case presenceAttachmentFailed = 102_002
case reactionsAttachmentFailed = 102_003
case occupancyAttachmentFailed = 102_004
case typingAttachmentFailed = 102_005

case messagesDetachmentFailed = 102_050
case presenceDetachmentFailed = 102_051
case reactionsDetachmentFailed = 102_052
case occupancyDetachmentFailed = 102_053
```


suggestion: Consider using a more systematic approach for error code values

The current use of magic numbers for error codes, while grouped logically, could be improved. Consider using constants or a more descriptive naming convention to make the values' significance clearer and easier to maintain.
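
One way to read this suggestion, sketched below purely for illustration (the bases are chosen so the derived values line up with the raw values in the diff above, but this is a reconstruction, not how the PR defines them):

```swift
// Illustrative alternative: model the feature and the failed operation separately and
// derive the numeric code from a named base, so the 102_0xx grouping is explicit.
internal enum ChatFeature: Int {
    case messages = 1, presence, reactions, occupancy, typing
}

internal enum RoomLifecycleErrorCode {
    private static let attachmentFailureBase = 102_000
    private static let detachmentFailureBase = 102_049 // messages detachment is 102_050 in the diff

    internal static func attachmentFailed(_ feature: ChatFeature) -> Int {
        attachmentFailureBase + feature.rawValue
    }

    internal static func detachmentFailed(_ feature: ChatFeature) -> Int {
        detachmentFailureBase + feature.rawValue
    }
}
```

For example, `RoomLifecycleErrorCode.attachmentFailed(.messages)` evaluates to 102_001, matching `messagesAttachmentFailed` above.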

```swift
}

/// Implements CHA-RL1’s `ATTACH` operation.
internal func performAttachOperation() async throws {
```


suggestion: Consider breaking down complex methods into smaller, focused functions

The performAttachOperation and performDetachOperation methods are quite long and handle multiple cases. Breaking these down into smaller, more focused methods could improve readability and maintainability.
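
A hypothetical shape for that decomposition (all helper and property names below are invented; the actual method body in RoomLifecycleManager.swift may not split along these lines):

```swift
// Sketch assumed to live inside RoomLifecycleManager: keep the operation itself as a
// short outline and move per-contributor work and failure mapping into helpers.
internal func performAttachOperation() async throws {
    changeStatus(to: .attaching)          // hypothetical helper
    try await attachEachContributor()     // hypothetical helper
    changeStatus(to: .attached)
}

private func attachEachContributor() async throws {
    for contributor in contributors {     // hypothetical stored property
        do {
            try await contributor.channel.attach()
        } catch {
            throw attachmentError(for: contributor, underlying: error) // hypothetical mapping helper
        }
    }
}
```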

When writing a test that relates to a spec point from the Chat SDK features spec, add a comment that contains one of the following tags:

- `@spec <spec-item-id>` — The test case directly tests all the functionality documented in the spec item.
- `@specOneOf(m/n) <spec-item-id>` — The test case is the m<sup>th</sup> of n test cases which, together, test all the functionality documented in the spec item.


suggestion (documentation): Replace Markdown superscript syntax with plain text

The superscript syntax 'th' won't render correctly in plain text. Consider using 'mth' instead.

- `@specOneOf(m/n) <spec-item-id>` — The test case is the m<sup>th</sup> of n test cases which, together, test all the functionality documented in the spec item.
- `@specPartial <spec-item-id>` — The test case tests some, but not all, of the functionality documented in the spec item. This is different to `@specOneOf` in that it implies that the test suite does not fully test this spec item.

The `<spec-item-id>` parameter should be a spec item identifier such as `CHA-RL3g`.


suggestion (documentation): Standardize terminology: 'spec point' vs 'spec item'

The document uses both 'spec point' and 'spec item'. Consider standardizing on one term for consistency throughout the document.
