I have added two lines to /etc/security/limits.conf: "myuser soft nofile 16384" and "myuser hard nofile 16384" …which has no effect: after "su -", "sysctl -p", "su myuser", running "ulimit -n" still reports 1024. It is important that this comes into effect without the user having to log in first, i.e. as root I start a script on his behalf. Asked by recalcitrant. Add this to /etc/security/limits.conf: "* soft nofile 16384" and "* hard nofile 16384". And add something like this to /etc/profile [...]Continue Reading »
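A minimal sketch of the no-login route described above ("myuser" and the script path are placeholders from the question, not a verified recipe): per-process rlimits are inherited across fork/exec, so raising the soft limit in root's shell before `su` passes the higher value to the command started on the user's behalf.

```shell
# Sketch: raise the limit in the parent (root) shell first; the child
# started via su inherits it. Note that pam_limits may still reset it,
# depending on how /etc/pam.d/su is configured on your system.
ulimit -n 16384                  # raise soft limit in root's shell
su myuser -c 'ulimit -n'         # child inherits the raised limit
su myuser -c '/path/to/script'   # placeholder path for the real script
```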
I’m banging my head on this, and I can’t understand why it’s not working. I’m hoping that someone can shed light on this, or failing that, give me some suggestions for avenues of investigation. I’ve got a Red Hat 7.3 system (don’t ask) where it’s desirable to increase the open files limit for the wls81 user. I thought that I was just not able to control it, but it increasingly looks as if I’ve only [...]Continue Reading »
I require more than 1024 clients to be connected to a single redis instance at once. My redis process runs as user ubuntu. I have edited /etc/security/limits.conf to specify: "ubuntu soft nofile 65535" and "ubuntu hard nofile 65535". I have also ensured that the maxclients parameter in redis.conf is commented out. What other steps must I take to ensure more than 1024 clients can connect to my redis instance, or is that all? Thanks! Asked by [...]Continue Reading »
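One check worth making here (the process name "redis-server" is an assumption about the setup): limits.conf only applies to new PAM sessions, so read the limit the already-running daemon actually received rather than opening a fresh shell.

```shell
# Sketch: inspect the rlimit of the live process, not a new login shell.
pid=$(pidof redis-server)
grep 'Max open files' "/proc/$pid/limits"
```

If this still shows 1024, the daemon was started before the limits change (or outside a PAM session) and needs a restart under the new limit.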
We believe we have increased the max open file descriptors for the root user. This was done by adding this line to /etc/security/limits.conf: "* - nofile 2048". We think we've confirmed that the root user's limit was increased because we can tell (not described here) that our application (solr, which is run by root) has 1098 files open. However, we can't tell for sure how many open files the root user is allowed. We [...]Continue Reading »
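A quick way to answer "how many are we allowed" from a shell running as the user in question: a "-" in the limits.conf type column sets both the soft and the hard limit, and each can be queried separately.

```shell
# Sketch: query soft and hard limits separately to confirm what the
# shell (and processes it starts) actually received.
ulimit -Sn   # soft limit: the value currently enforced
ulimit -Hn   # hard limit: the ceiling the soft limit may be raised to
```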
This question is similar to set open_files_limit, but there was no good answer. I need to increase my table_open_cache, but first I need to increase the open_files_limit. I set the option in /etc/mysql/my.cnf: open-files-limit = 8192 This worked fine in my previous install (Ubuntu 8.04), but now in Ubuntu 10.04, when I start the server up, open_files_limit is reported to be 1710. That seems like a pretty random number for the limit to be clipped [...]Continue Reading »
I’m running an Ubuntu 10.04 (Lucid) Samba fileserver. I have a Windows 7 client which opens a large number of files while copying thousands of tiny files at once. It receives the error “Too many open files”, at which point waiting a few seconds and clicking “Try again” resumes the download. I’ve found a number of references that say to increase the number of open files available to Samba to solve the [...]Continue Reading »
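For reference, the Samba-side knob usually cited for this goes in the [global] section of /etc/samba/smb.conf (the value below is illustrative, not taken from the question):

```
[global]
   # Illustrative value. Samba remains capped by the smbd process's own
   # RLIMIT_NOFILE, so the system limit may need raising as well.
   max open files = 16384
```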
How can lsof report more open files than what ulimit says is the limit? On prod_web3 (i-ca0b05aa), "sudo lsof | wc -l" reports 4399, while "ulimit -n" reports 1024. From the man page for the ulimit builtin: "The ulimit builtin provides control over the resources available to the shell and to processes started by it on systems that allow such control." Your lsof command lists all of the open files for all processes for all users on the system. You are [...]Continue Reading »
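The two numbers above answer different questions: ulimit -n is a per-process cap, while a bare lsof spans every process on the machine. A like-for-like comparison counts one process's descriptors, sketched here for the current shell:

```shell
# Sketch: compare a single process's limit against that same process's
# open descriptors, instead of against a system-wide lsof count.
ulimit -n                 # limit for this shell process only
ls /proc/$$/fd | wc -l    # descriptors this shell has open right now
```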
Background: I’m playing around with monitoring the ulimit for running processes for a particular user. (I had occasionally seen processes that were getting started with an incorrect limit.) I asked a couple of self-professed Linux gurus; one suggested lsof -p <pid>, while the other suggested ls /proc/<pid>/fd, but neither was positive about which more accurately reflects the actual count towards the max-open-files limit for a process. So, which is it? lsof -p <pid> [...]Continue Reading »
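As a rule of thumb (a sketch, not the full answer from the post): /proc/<pid>/fd holds one symlink per open file descriptor, which is exactly what counts against RLIMIT_NOFILE, whereas lsof -p also lists non-descriptor entries such as cwd, the program text, and memory-mapped files. This can be sanity-checked from a shell:

```shell
# Sketch: opening one extra descriptor grows /proc/$$/fd by exactly one.
before=$(ls /proc/$$/fd | wc -l)
exec 9>/dev/null                 # open one extra descriptor
after=$(ls /proc/$$/fd | wc -l)
echo "$before -> $after"         # after is before + 1
exec 9>&-                        # close it again
```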
The default nofile limit for OS X user accounts seems to be about 256 file descriptors these days. I’m trying to test some software that needs a lot more connections than that open at once. On a typical Debian box running the pam_limits module, I’d edit /etc/security/limits.conf to set higher limits for the user that will be running the software, but I’m mystified as to where to set these limits in OS X. Is there a [...]Continue Reading »
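On OS X the mechanism is launchd rather than pam_limits, and the details are version-dependent; one commonly cited option from that era (values here are illustrative) is a boot-time line in /etc/launchd.conf:

```
# /etc/launchd.conf — soft then hard limit; illustrative values.
# Later OS X releases dropped this file in favor of launchd plists;
# "launchctl limit maxfiles" as root changes it for the current boot.
limit maxfiles 16384 16384
```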
We recently began load testing our application and noticed that it ran out of file descriptors after about 24 hours. We are running RHEL 5 on a Dell 1955 (CPU: 2 × dual-core 2.66 GHz 5150, 4 MB cache, 1333 MHz FSB; RAM: 8 GB; HDD: 2 × 160 GB 2.5″ SATA). I checked the file descriptor limit and it was set at 1024. Considering that our application could potentially have about 1000 incoming connections as well [...]Continue Reading »
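When a box runs dry after a long load test, it helps to check both ceilings, since either can be the one exhausted. A minimal sketch:

```shell
# Sketch: per-process cap vs. kernel-wide file table.
ulimit -n                    # per-process soft limit (1024 in the question)
cat /proc/sys/fs/file-max    # system-wide maximum number of open files
cat /proc/sys/fs/file-nr     # currently allocated, unused, and the max
```

If ulimit is the bottleneck, raise it in /etc/security/limits.conf; if file-max is, raise fs.file-max via sysctl. A steadily climbing file-nr over 24 hours also points at a descriptor leak rather than a genuinely undersized limit.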
- Cron expression that runs every 5 minutes from 1:30 am – 6:00 am [duplicate]
- Understanding redundant power supplies
- Is there a way for administrators to disable users from installing Firefox extensions?
- Is there research material on NTP accuracy available?
- How to create a limited “domain admin” that does not have access to domain controllers?