What happened?
After upgrading from 2.33.0 to 2.61.0, jobs on the Spark runner start to fail with OutOfMemory errors.
The culprit is this change: #15637, which effectively disabled the memory-sensitive GBK translation using secondary sort introduced by https://issues.apache.org/jira/browse/BEAM-5392
I propose restoring the memory-sensitive translation by considering it first, and only if it cannot be used, falling back to the default GBK translation and its global-window optimisation.
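The proposed selection order can be sketched as follows. This is a minimal, hypothetical illustration of the fallback logic only; the enum, the predicate, and the method names are invented for this sketch and are not the actual Beam Spark runner API.

```java
// Hypothetical sketch of the proposed GBK translation-selection order on the
// Spark runner: prefer the memory-sensitive secondary-sort translation
// (BEAM-5392), and fall back to the default GBK translation (with its
// global-window optimisation) only when secondary sort cannot be used.
public class GbkTranslationOrder {

  enum Translation { SECONDARY_SORT, DEFAULT_GBK }

  // "secondarySortUsable" stands in for whatever runtime check decides that
  // the memory-sensitive translation applies to this GroupByKey.
  static Translation choose(boolean secondarySortUsable) {
    return secondarySortUsable ? Translation.SECONDARY_SORT : Translation.DEFAULT_GBK;
  }

  public static void main(String[] args) {
    System.out.println(choose(true));
    System.out.println(choose(false));
  }
}
```

The point of the ordering is that the memory-sensitive path is only skipped when it genuinely cannot apply, rather than being shadowed by the default translation as after #15637.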
Issue Priority
Priority: 2 (default / most bugs should be filed as P2)
Issue Components
Component: Python SDK
Component: Java SDK
Component: Go SDK
Component: Typescript SDK
Component: IO connector
Component: Beam YAML
Component: Beam examples
Component: Beam playground
Component: Beam katas
Component: Website
Component: Infrastructure
Component: Spark Runner
Component: Flink Runner
Component: Samza Runner
Component: Twister2 Runner
Component: Hazelcast Jet Runner
Component: Google Cloud Dataflow Runner