[Fixed] Truncated or oversized response headers received from daemon process

1👍

Turned out not to be the actual problem. The problem lay deeper: I had switched from cairo to cairocffi, and the RSVG handler couldn't deal with the context object coming from cffi.
Now my actual problem is getting an up-to-date Python library that lets me convert SVG into PNG. Using svg2png from CairoSVG isn't working for me. I get a

cairo returned CAIRO_STATUS_NO_MEMORY: out of memory

error, which I'm fairly sure is once again not telling the truth, and the problem lies somewhere else. But let's see.
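
For reference, a minimal svg2png call looks like this (the file names are placeholders I picked for illustration); if even this fails with CAIRO_STATUS_NO_MEMORY, the fault most likely sits in the underlying cairo/cairocffi stack rather than in the calling code:

import cairosvg

# Render an SVG file straight to a PNG on disk.
cairosvg.svg2png(url="drawing.svg", write_to="drawing.png")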

8👍

We recently ran into this issue, and after days of vigorous googling and a massive headache, we discovered that we were using psycopg2-binary as our database connector dependency (I know, newbie mistake)! Its documentation states right there not to use that package in a production environment.
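
If you are not sure which variant ended up in your environment, a quick check from Python works (a small sketch; requires Python 3.8+ for importlib.metadata):

from importlib.metadata import distributions

# Collect the names of all installed distributions and look for the binary wheel.
installed = {dist.metadata["Name"] for dist in distributions()}
print("psycopg2-binary installed:", "psycopg2-binary" in installed)
print("psycopg2 installed:", "psycopg2" in installed)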

We did apply all the other proposed answers, such as adding 'WSGIApplicationGroup %{GLOBAL}' to our Apache configuration (which we kept), but none of them, alone or together, solved the issue.

We also found that other C-based libraries, like numpy, can cause issues.

Hope this helps someone some day.

Django Webfaction 'Timeout when reading response headers from daemon process'

http://initd.org/psycopg/docs/install.html#prerequisites

6👍

The code that mod_wsgi used from Apache applied a limit on the size of a single response header returned from mod_wsgi daemon mode processes. Hitting it would result in a really cryptic error message from Apache which didn't point to the problem at all. From memory, the previous error was:

Premature end of script headers

The size limit was also hard-coded in Apache and couldn't be changed. This has caused problems for some Python web applications, such as Keystone in OpenStack, which generates very large authentication headers.

In mod_wsgi 4.1+, the reliance on the Apache code has been removed and the limit is now configurable. The error message is also more specific as you have seen.

The default maximum size of a header returned from a mod_wsgi daemon mode process, that is, header name and value combined, is about 8192 bytes. You can override this using the header-buffer-size option of the WSGIDaemonProcess directive.
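
As a rough illustration of what trips that limit, here is a contrived WSGI application (not taken from the original question) that emits a single response header larger than the 8192-byte default; under daemon mode this is the kind of response that produces the error unless header-buffer-size is raised above the header's size:

def application(environ, start_response):
    # A ~10 KB header value, larger than the default 8192-byte buffer.
    huge_value = "x" * 10000
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("X-Big-Token", huge_value),
    ])
    return [b"ok\n"]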

Can you indicate what application it was and what header was so large that the limit was reached? I would like to know what other Python web applications besides Keystone are generating such large headers, if it is a commonly used application.

A second possibility, deriving from the 'truncated' reference in that message, is that your mod_wsgi daemon process crashed. You don't say, though, that you saw a 'Segmentation fault' or similar message indicating that a crash occurred. Check for that, and if there are other messages in the error log at the time, indicate what they were so that crash can be looked at as the primary problem.

1👍

I had this issue on a CentOS 7 server when deploying Django using httpd with mod_wsgi 4.5.4. I had to revert to mod_wsgi 4.3.2, which solved my problem.

👤Morfat

1👍

I had installed Filebeat, which changed my SSL version, so psycopg2 needed to be updated; the error was 'Truncated or oversized response headers received from daemon process'.

Do the following:

Uninstall your psycopg2 package using pip. I am using pip3 because my Python version is 3.6.8.

sudo pip3 uninstall psycopg2

Reinstall psycopg2 using pip.

sudo pip3 install psycopg2

Before: psycopg2-2.7.4; now: psycopg2-2.8.3.
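
To confirm which version is active after the reinstall, a quick check from the same Python interpreter will do (a minimal sketch, nothing project-specific):

import psycopg2

# Should now print a version string starting with "2.8.3" rather than "2.7.4".
print(psycopg2.__version__)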

👤Rajesh

0👍

I suddenly had this same problem after an update, and the next update fixed it… I run Arch; as of the date of this post, the mod_wsgi version in the repo works.

0👍

I had the same error message “Truncated or oversized response headers received from daemon process ‘…’: /var/www/dev.audiocont.com/index.wsgi” in my Django project (without any other error message).

My mistake was that I had changed the virtual environment and forgotten to update the Apache configuration "dev.conf" to the new venv path.

0👍

Change the deadlock timeout in httpd.conf.
I tried everything and none of the answers worked for me. Then I changed the deadlock timeout and everything works fine now. The server goes into an idle state during long processing; just change the deadlock timeout.

👤ashu

0👍

I ran into the same problem, "Truncated or oversized response headers".

I resolved it by adding

"WSGIDaemonProcess test user=apache group=apache processes=1 display-name=%{GROUP} header-buffer-size=65536" 

in app.conf or httpd.conf, depending on your configuration file.

Based on your server's RAM size, adjust processes and header-buffer-size; the value used for header-buffer-size here is 65536.

0👍

For me SQLite turned out to be the problem.

I migrated a Django application that uses SQLite to a new server, and this error started appearing, with HTTP requests hanging.

I managed to solve the issue with:

WSGIApplicationGroup %{GLOBAL}

But I wanted to get to the bottom of the problem, as this application was not using any Python modules different from the other applications on the same server.

I realized that the only difference was the SQLite database, and after migrating from SQLite to Postgres, the problem went away.

In the past, I’ve had other grievances with using Django with SQLite, so I would advise against using SQLite for anything that needs to go into production.
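
For anyone making the same switch, the Django side is mostly a settings change followed by re-running migrations; here is a minimal sketch with placeholder connection details (none of these values come from the original answer):

# settings.py
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "mydb",            # placeholder database name
        "USER": "mydb_user",       # placeholder role
        "PASSWORD": "change-me",   # placeholder password
        "HOST": "localhost",
        "PORT": "5432",
    }
}

After pointing DATABASES at Postgres, run python manage.py migrate against the new database.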
