
"Exceeded soft private memory limit" issue breaks a job #78

Open
MeLight opened this issue Oct 15, 2015 · 2 comments

MeLight commented Oct 15, 2015

At a certain point during the mapreduce job, on one of the worker callbacks (/mapreduce/worker_callback/15734955148708B76E8DE-1), I'm getting this error:

Exceeded soft private memory limit of 128 MB with 128 MB after servicing 1261 requests total

And after that on each mapreduce request I get this error:

Traceback (most recent call last):
  File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/runtime/wsgi.py", line 240, in Handle
    handler = _config_handle.add_wsgi_middleware(self._LoadHandler())
  File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/runtime/wsgi.py", line 299, in _LoadHandler
    handler, path, err = LoadObject(self._handler)
  File "/base/data/home/runtimes/python27/python27_lib/versions/1/google/appengine/runtime/wsgi.py", line 85, in LoadObject
    obj = __import__(path[0])
ImportError: No module named mapreduce

This repeats until the job dies. Please advise.

@tkaitchuck
Contributor

I'm not sure about the second error (surely mapreduce was already loaded at that point).
However, the instance size you are using likely has too little memory for what you are attempting to do in the mapper. Either you are using more memory than you are aware of (perhaps because there are many reduce shards and the write buffers are taking a lot of RAM), or your instance size is simply too small (while it is possible, it is difficult to run an MR job in under 128 MB).
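On App Engine standard, the soft private memory limit follows the instance class, so one possible fix is moving to a larger class. A minimal app.yaml sketch, assuming the python27 runtime and the mapreduce library's standard handler mapping (`mapreduce.main.APP`); the class sizes are from the App Engine instance-class table (F1: 128 MB, F2: 256 MB, F4: 512 MB):

```yaml
# app.yaml (sketch): raise the instance class so the soft private
# memory limit rises above 128 MB.
runtime: python27
api_version: 1
threadsafe: true
instance_class: F2        # 256 MB instead of F1's 128 MB

handlers:
- url: /mapreduce(/.*)?
  script: mapreduce.main.APP
```

After redeploying, the same 1261-request run would have twice the headroom before the soft limit is hit; whether that is enough still depends on shard count and write-buffer sizes.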

@tkaitchuck
Contributor

Actually, I didn't notice the 1261 requests. That's a lot, so clearly it is not dying right away. Are those all MR requests, or are you also serving other traffic on the same module? I highly recommend putting MR jobs in their own module.
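Running MR jobs in a dedicated module, as suggested above, can be sketched with a second module config; the module name `mapreduce-worker` and the scaling settings here are illustrative assumptions, not values from this issue:

```yaml
# mapreduce-worker.yaml (sketch): a separate module so MR traffic
# does not compete with user-facing requests for instance memory.
module: mapreduce-worker
runtime: python27
api_version: 1
threadsafe: true
instance_class: B4        # basic scaling permits larger instance classes

basic_scaling:
  max_instances: 10

handlers:
- url: /mapreduce(/.*)?
  script: mapreduce.main.APP
```

MR task-queue traffic can then be routed to this module by setting `target: mapreduce-worker` on the queue the mapreduce library uses in queue.yaml.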
