
[Bug]: BigtableIO.Read can't retrieve large row that exceeds 256MB #33039

Open · 1 of 17 tasks

waterxjw opened this issue Nov 7, 2024 · 6 comments

waterxjw commented Nov 7, 2024

What happened?

I want to export the entire table from Bigtable via BigtableIO.Read, but some rows exceed the 256MB limit, which causes the read to fail. It seems the Bigtable server refuses to return any row larger than 256MB.

I found some information in the Bigtable documentation, which suggests paginating the request with a cells-per-row limit filter and a cells-per-row offset filter (sketched below).

But I don't know how to apply this method with BigtableIO.Read, given that I want to export all the data in the table, and I don't know how to implement dynamic pagination by cell within one pipeline.

I would like to know whether BigtableIO.Read currently has the capability to handle this scenario. If it does not, are there any alternative solutions that would let me export all the data cleanly?
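
For context, the pagination pattern the documentation describes looks roughly like this with the java-bigtable client (a sketch; cellsPerPage and pageNumber are illustrative values, not part of any documented API):

import com.google.cloud.bigtable.data.v2.models.Filters;
import static com.google.cloud.bigtable.data.v2.models.Filters.FILTERS;

// Read one "page" of a large row: skip the cells already read,
// then cap how many cells this request may return.
int cellsPerPage = 100;   // illustrative; tune so each page stays well under 256MB
int pageNumber = 2;       // zero-based page index
Filters.Filter pageFilter = FILTERS.chain()
    .filter(FILTERS.offset().cellsPerRow(pageNumber * cellsPerPage))
    .filter(FILTERS.limit().cellsPerRow(cellsPerPage));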

Issue Priority

Priority: 2 (default / most bugs should be filed as P2)

Issue Components

  • Component: Python SDK
  • Component: Java SDK
  • Component: Go SDK
  • Component: Typescript SDK
  • Component: IO connector
  • Component: Beam YAML
  • Component: Beam examples
  • Component: Beam playground
  • Component: Beam katas
  • Component: Website
  • Component: Infrastructure
  • Component: Spark Runner
  • Component: Flink Runner
  • Component: Samza Runner
  • Component: Twister2 Runner
  • Component: Hazelcast Jet Runner
  • Component: Google Cloud Dataflow Runner
github-actions bot (Contributor) commented Nov 7, 2024

Label Bigtable cannot be managed because it does not exist in the repo. Please check your spelling.

waterxjw (Author) commented Nov 7, 2024

.add-labels io,gcp,bigtable

liferoad (Collaborator) commented Nov 7, 2024

cc @mutianf @igorbernstein2

CoderUper commented

Have you fixed it now? I also hit this problem. @waterxjw

waterxjw (Author) commented

> Have you fixed it now? I also hit this problem. @waterxjw

Not yet

stankiewicz (Contributor) commented Dec 12, 2024

Hi,
I assume the number of cells per row is known and bounded by a reasonable value.
To apply the mentioned filters you need to build a chain:

import com.google.bigtable.v2.RowFilter;
import org.apache.beam.sdk.io.gcp.bigtable.BigtableIO;

// The offset filter must precede the limit filter; the reverse order keeps one cell and then drops it, returning nothing.
RowFilter chain = RowFilter.newBuilder().setChain(RowFilter.Chain.newBuilder()
            .addFilters(RowFilter.newBuilder().setCellsPerRowOffsetFilter(1).build())
            .addFilters(RowFilter.newBuilder().setCellsPerRowLimitFilter(1).build())
            .build()).build();
BigtableIO.Read readTransformCell1 = BigtableIO.read().withProjectId("project")
        .withInstanceId("instance").withTableId("table").withRowFilter(chain);

You would then loop over the cell offset and Flatten those readTransformCellN transforms (see the sketch below).
You will end up with elements <rowKey, Row with 1 cell> where multiple elements share the same rowKey. Then it is up to you whether to group by key and rebuild the large rows.
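
A minimal sketch of that loop and regrouping, given a Pipeline named pipeline and a known upper bound MAX_CELLS on cells per row; the ids, MAX_CELLS, and the UTF-8 string keying are illustrative assumptions, not part of BigtableIO:

import com.google.bigtable.v2.Row;
import com.google.bigtable.v2.RowFilter;
import org.apache.beam.sdk.io.gcp.bigtable.BigtableIO;
import org.apache.beam.sdk.transforms.Flatten;
import org.apache.beam.sdk.transforms.GroupByKey;
import org.apache.beam.sdk.transforms.MapElements;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.PCollection;
import org.apache.beam.sdk.values.PCollectionList;
import org.apache.beam.sdk.values.TypeDescriptor;
import org.apache.beam.sdk.values.TypeDescriptors;

// One read per cell index: skip i cells, then keep exactly one.
PCollectionList<Row> pages = PCollectionList.empty(pipeline);
for (int i = 0; i < MAX_CELLS; i++) {
  RowFilter pageFilter = RowFilter.newBuilder().setChain(RowFilter.Chain.newBuilder()
      .addFilters(RowFilter.newBuilder().setCellsPerRowOffsetFilter(i).build())
      .addFilters(RowFilter.newBuilder().setCellsPerRowLimitFilter(1).build())
      .build()).build();
  pages = pages.and(pipeline.apply("ReadCell" + i,
      BigtableIO.read().withProjectId("project").withInstanceId("instance")
          .withTableId("table").withRowFilter(pageFilter)));
}

// Flatten the per-cell reads and regroup by row key to rebuild large rows.
// Keys are converted to UTF-8 strings for coder simplicity; binary row keys
// would need a ByteString-based coder instead.
PCollection<KV<String, Iterable<Row>>> regrouped = pages
    .apply(Flatten.pCollections())
    .apply(MapElements
        .into(TypeDescriptors.kvs(TypeDescriptors.strings(), TypeDescriptor.of(Row.class)))
        .via(r -> KV.of(r.getKey().toStringUtf8(), r)))
    .apply(GroupByKey.create());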

If you have an unknown number of large cells, then I would recommend building a pipeline that fetches row keys only, redistributes them, and then uses a ParDo that fetches cells with similar filters in a loop using the regular Bigtable client (a sketch follows below).
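
A sketch of that ParDo using the regular java-bigtable client; the project/instance/table ids, the page size, and the key-only upstream read are assumptions for illustration:

import com.google.cloud.bigtable.data.v2.BigtableDataClient;
import com.google.cloud.bigtable.data.v2.models.Row;
import com.google.protobuf.ByteString;
import org.apache.beam.sdk.transforms.DoFn;
import static com.google.cloud.bigtable.data.v2.models.Filters.FILTERS;

// Given one row key, page through that row's cells so no single
// response comes close to the 256MB limit.
class FetchRowInPagesFn extends DoFn<ByteString, Row> {
  private static final int CELLS_PER_PAGE = 100; // illustrative; tune per workload
  private transient BigtableDataClient client;

  @Setup
  public void setup() throws Exception {
    client = BigtableDataClient.create("project", "instance");
  }

  @ProcessElement
  public void process(@Element ByteString rowKey, OutputReceiver<Row> out) {
    for (int offset = 0; ; offset += CELLS_PER_PAGE) {
      // Skip the cells already emitted, then fetch at most one page.
      Row page = client.readRow("table", rowKey, FILTERS.chain()
          .filter(FILTERS.offset().cellsPerRow(offset))
          .filter(FILTERS.limit().cellsPerRow(CELLS_PER_PAGE)));
      if (page == null) {
        break; // the filter removed every remaining cell: row is done
      }
      out.output(page);
      if (page.getCells().size() < CELLS_PER_PAGE) {
        break; // partial page, nothing left to read
      }
    }
  }

  @Teardown
  public void teardown() {
    if (client != null) {
      client.close();
    }
  }
}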
