Add interface for NoSQL storage #214

Draft · wants to merge 114 commits into master
Conversation

mcopik (Collaborator) commented Jul 26, 2024

We want to add NoSQL storage and a test benchmark implementing a simple CRUD API that emulates a shopping cart, like on Amazon.

  • Add Python CRUD benchmark.
  • Add a very simple benchmark acting as a get/put API.
  • Rename keys to primary and clustering.
  • Verify the solution works with both single and composite keys.
  • Use the CLI only when needed; don't start it proactively on cached tables.
  • Add a distinction between insert and upsert (ignore whether the item already exists?).
  • Don't create GCP/Azure environments for tables when they are not needed.
  • Cherry-pick workflow commits fixing env update on cold start.
  • Enforce cold start on Azure - is the current update sufficient?
  • Add NoSQL storage in Node.js.
  • Add Node.js CRUD benchmark.
  • Support DynamoDB on AWS.
  • Support CosmosDB on Azure.
  • Support Firestore/Datastore on Google Cloud.
  • Create a consistent way of deploying both Minio and ScyllaDB simultaneously.
  • Unify OpenWhisk/Local storage resources.
  • Support ScyllaDB on local deployment.
  • Verify caching of Minio/ScyllaDB on local deployment.
  • Verify it works with OpenWhisk.
  • Verify regression still works (CLI init).
  • Verify it works with the perf-cost experiment; we now modify env variables.
  • Add documentation on NoSQL design and configuration.
  • Add documentation on the new storage CLI.
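The planned get/put CRUD benchmark can be sketched end-to-end with an in-memory stand-in for the NoSQL wrapper. Note the wrapper API below (insert/get/delete keyed by primary and clustering key) is an assumption based on the checklist items above, not the final SeBS interface:

```python
class InMemoryNoSQL:
    """Stand-in for the per-platform NoSQL wrapper (DynamoDB/CosmosDB/Firestore)."""

    def __init__(self):
        self._tables = {}

    def insert(self, table, primary_key, clustering_key, data):
        # insert fails if the item exists; an upsert would overwrite silently
        tbl = self._tables.setdefault(table, {})
        key = (primary_key, clustering_key)
        if key in tbl:
            raise KeyError(f"item {key} already exists")
        tbl[key] = data

    def get(self, table, primary_key, clustering_key):
        return self._tables[table].get((primary_key, clustering_key))

    def delete(self, table, primary_key, clustering_key):
        self._tables[table].pop((primary_key, clustering_key), None)


def handler(event, nosql):
    """Shopping-cart CRUD: put items into a cart, then read the cart back."""
    cart_id = event["cart_id"]
    for item in event["items"]:
        nosql.insert("shopping_cart", cart_id, item["id"], {"price": item["price"]})
    total = sum(
        nosql.get("shopping_cart", cart_id, item["id"])["price"]
        for item in event["items"]
    )
    return {"cart_id": cart_id, "total": total}
```

The cart id acts as the primary key and the item id as the clustering key, which is also what makes the single-vs-composite-key verification item above meaningful.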

coderabbitai bot commented Jul 26, 2024

Review skipped: draft detected. To trigger a single review, invoke the @coderabbitai review command.
@mcopik mcopik changed the title [system] Add basic interface for allocating NoSQL storage Add interface for allocating NoSQL storage Jul 26, 2024
@mcopik mcopik changed the title Add interface for allocating NoSQL storage Add interface for NoSQL storage Jul 26, 2024
oanarosca (Collaborator) left a comment


About 2/3 through the files - publishing some comments now

"python",
"nodejs"
],
"modules": [
oanarosca (Collaborator):

My understanding is that we will list resource 'modules' here e.g. nosql, storage, queues. How would this be adapted in the case of applications - where each function might have different requirements - such that the code that processes this json and creates said resources can do so as seamlessly as possible?

mcopik (Collaborator, author):

@oanarosca That is an excellent question on which I'm working right now. At this moment, I think the best option would be to make the json a collection of configuration points, one general (benchmark's language), and the other with per-function config.
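A hypothetical sketch of such a config layout (the module and function names here are invented for illustration):

```json
{
  "languages": ["python", "nodejs"],
  "modules": ["storage", "nosql"],
  "functions": {
    "add_to_cart": { "modules": ["nosql"] },
    "export_report": { "modules": ["storage"] }
  }
}
```

A top-level "modules" list would cover benchmark-wide resources, while per-function entries would let the code that processes this JSON create only what each function of an application actually needs.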


# Set initial data

nosql_func(
oanarosca (Collaborator):

Nit: perhaps this could be renamed to something more descriptive

mcopik (Collaborator, author):

@oanarosca Good catch, will fix! :-)

@@ -3,6 +3,14 @@
sys.path.append(os.path.join(os.path.dirname(__file__), '.python_packages/lib/site-packages'))


if 'NOSQL_STORAGE_DATABASE' in os.environ:
oanarosca (Collaborator):

Reading this, I was slightly misled into thinking that this must be set by the user, though I see it is actually set in gcp.py. Perhaps a comment along the lines of "this env var is set in this function in this file" would help clarify.

Resolved threads: benchmarks/wrappers/gcp/python/nosql.py, sebs/faas/resources.py
self._region = region
self._cloud_resources = resources

# Map benchmark -> orig_name -> table_name
oanarosca (Collaborator):

Do we still need this?

pass

@abstractmethod
def writer_func(
oanarosca (Collaborator):

How about write or write_to_table?

(3) Update cached data if anything new was created
"""

def create_benchmark_tables(
oanarosca (Collaborator):

The singular form (create_benchmark_table) would probably be more appropriate.

I am even wondering whether we could only have create_table as a method and skip create_benchmark_table altogether -- or is there a reason why this is done like this?

mcopik (Collaborator, author):

The main idea was to force a single behavior on all systems: check cache, create a name, and create a table if necessary.
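That shared behavior could look roughly like this; the cache layout and the "sebs-" naming scheme below are assumptions for illustration, not the actual implementation:

```python
class NoSQLStorage:
    """Sketch of the shared flow forced on all systems:
    check the cache, derive a name, create the table only if needed."""

    def __init__(self):
        self._cache = {}    # (benchmark, orig_name) -> actual table name
        self._tables = {}   # benchmark -> orig_name -> actual table name
        self.created = []   # records real create_table calls, for clarity

    def create_table(self, name):
        # platform-specific in the real code (DynamoDB/CosmosDB/Firestore)
        self.created.append(name)

    def create_benchmark_tables(self, benchmark, table_names):
        mapping = self._tables.setdefault(benchmark, {})
        for orig_name in table_names:
            actual = self._cache.get((benchmark, orig_name))
            if actual is None:
                # cache miss: derive a deployment-unique name and create it
                actual = f"sebs-{benchmark}-{orig_name}"
                self.create_table(actual)
                self._cache[(benchmark, orig_name)] = actual
            mapping[orig_name] = actual
```

With this split, `create_table` stays a thin per-platform primitive while the caching and naming policy lives once in `create_benchmark_tables`.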

self.client = session.client(
"dynamodb",
region_name=region,
aws_access_key_id=access_key,
oanarosca (Collaborator):

I didn't use these in my implementation - is there something here that requires them? Not a strong opinion either way

mcopik (Collaborator, author):

@oanarosca What do you mean here? The access keys?

oanarosca (Collaborator):

Yes - I found that this will still work without them, but it's a small thing anyway 🙂

aws_secret_access_key=secret_key,
)

# Map benchmark -> orig_name -> table_name
oanarosca (Collaborator):

Clarifying what orig_name means here would be useful - perhaps you could give an example to show exactly how the table name maps to the original name (I see below that it's quite a construction)

raise NotImplementedError()

def remove_table(self, name: str) -> str:
raise NotImplementedError()
oanarosca (Collaborator):

Not sure if you still plan to use these/have already implemented them, but feel free to take inspiration from my branch

mcopik (Collaborator, author):

@oanarosca Thanks! I will try to cherry-pick the commits or copy the code :-)

of buckets. Buckets may be created or retrieved from cache.

:param benchmark: benchmark name
:param buckets: tuple of required input/output buckets
oanarosca (Collaborator):

This is actually missing from the function signature/implementation. Perhaps the implementation is still incomplete?

Also, do we plan a similar approach for storage? To pass some parameters here and have everything immediately initialised?

mcopik (Collaborator, author):

@oanarosca Good catch, these are old comments. We changed the implementation :-)

for original_name, actual_name in nosql_storage.get_tables(
code_package.benchmark
).items():
envs[f"NOSQL_STORAGE_TABLE_{original_name}"] = actual_name
oanarosca (Collaborator):

Why are env vars the best approach in this situation?

mcopik (Collaborator, author):

@oanarosca For object storage buckets, we had to pass their names as part of the input. I think that using envs is a better choice, as we don't have to adjust the input of each invocation.

Do you see a better way of doing that?
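On the function side, the lookup then reduces to a single environment read. A minimal sketch (the helper name is hypothetical; the variable naming follows the snippet above):

```python
import os


def table_name(original_name):
    """Resolve the platform-specific table name from an env var
    set at deployment time, e.g. NOSQL_STORAGE_TABLE_shopping_cart."""
    return os.environ[f"NOSQL_STORAGE_TABLE_{original_name}"]
```

Compared to passing table names in the invocation payload (as done for object storage buckets), this keeps the per-invocation input unchanged.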

oanarosca (Collaborator):

What other cloud resources might go in this file (considering the pretty generic name)?

mcopik (Collaborator, author):

@oanarosca I will add docs here - I simply separated the implementation from config, as with CosmosDB account the implementation of Azure resources became quite large.

Requires Azure CLI instance in Docker to obtain storage account details.

:param benchmark:
:param buckets: number of input and output buckets
oanarosca (Collaborator):

Missing here too

"""

self.logging.info(f"Allocating a new Firestore database {database_name}")
self._cli_instance.execute(
oanarosca (Collaborator):

What's the reason why we are using the CLI to do this?

mcopik (Collaborator, author):

@oanarosca AFAIK there's no API way to allocate a Firestore database except using the gcloud tool. Did you find one somewhere? It would greatly simplify the implementation :-)
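For reference, the gcloud invocation being wrapped here looks roughly like this (the database name and location are placeholders):

```shell
# Provision a Firestore database in native mode; per the discussion above,
# there is no equivalent in the Python client libraries for this step.
gcloud firestore databases create \
  --database=sebs-benchmarks \
  --location=europe-west1
```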


return envs

def _generate_function_envs(self, code_package: Benchmark) -> dict:
oanarosca (Collaborator):

What other env vars do we think this function will have to support in the future (so apart from NoSQL-related)?

mcopik (Collaborator, author):

@oanarosca If we use Redis for something, we will also need to pass Redis access details.
