I am using large datasets (200,000 features, ~100 MB) loaded through the shapefile datasource.
It works quite well on smaller datasets, but it seems to die when I go large. Any ideas why this might be happening, and is there anything I can do to work around it?
Here is the stack trace. It dies in NIO:
java.io.IOException: Not enough storage is available to process this command
at sun.nio.ch.FileChannelImpl.map0(Native Method)
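For context, the failure originates in `FileChannel.map`, which delegates to the native `map0`. A minimal sketch of the mapping pattern involved (class name and file layout are illustrative, not the actual datasource code):

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class MapShapefile {
    /** Memory-maps an entire file read-only, the way the datasource does. */
    public static MappedByteBuffer map(String path) throws IOException {
        RandomAccessFile raf = new RandomAccessFile(path, "r");
        try {
            FileChannel channel = raf.getChannel();
            // map0 is invoked inside map(); on Windows a large mapping can
            // fail with "Not enough storage is available to process this
            // command" when contiguous virtual address space runs out.
            return channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size());
        } finally {
            // A mapping, once established, outlives the channel it came from.
            raf.close();
        }
    }
}
```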
CodeHaus Comment From: ianschneider - Time: Thu, 6 May 2004 14:37:05 -0500
The error occurs in native code and is mapped from the underlying OS.
Not enough storage is available to process this command.
Do one of the following, then retry the command: (1) reduce the number of running programs; (2) remove unwanted files from the disk the paging file is on and restart the system; (3) check the paging file disk for an I/O error; or (4) install additional memory in your system.
CodeHaus Comment From: jgarnett - Time: Thu, 6 May 2004 16:47:09 -0500
Nevertheless, we have to figure something out. Right now they are partitioning the data (the machine has 1.5 GB, and JUMP manages to load this information).
Bleck, I noticed your bug reference was from Windows 2000 - would a different operating system help? Windows XP or Linux?
CodeHaus Comment From: aaime - Time: Fri, 7 May 2004 02:09:05 -0500
This thread on the Sun Java forum may be of interest... it seems this error is not easy to reproduce, even on Windows...
CodeHaus Comment From: - Time: Fri, 7 May 2004 05:54:45 -0500
I remember having a problem similar to this.
If I remember correctly (it was a while back), we were running out of virtual memory address space on NT4 and 2K boxes. MappedByteBuffer was not being released correctly, and the address space could never be reclaimed. If we opened and closed one of our shapefiles (300 MB+) a few times, it would barf with that error. The only things we could do were to ensure we didn't reload shapefiles and to load at most 2 GB. In the end we scrapped MappedByteBuffers for normal ones. Never tested on Linux.
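The workaround described above (scrapping MappedByteBuffer for normal buffers) amounts to reading the file through `FileChannel.read` into a heap-allocated `ByteBuffer`, which the garbage collector can reclaim like any other object. A rough sketch, with the class and method names invented for illustration:

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

public class HeapBufferReader {
    /** Reads the whole file into a normal (non-mapped) heap buffer. */
    public static ByteBuffer readFully(String path) throws IOException {
        RandomAccessFile raf = new RandomAccessFile(path, "r");
        try {
            FileChannel channel = raf.getChannel();
            ByteBuffer buffer = ByteBuffer.allocate((int) channel.size());
            while (buffer.hasRemaining()) {
                if (channel.read(buffer) < 0) {
                    break; // unexpected end of file
                }
            }
            buffer.flip(); // prepare the buffer for reading from position 0
            // Unlike a MappedByteBuffer, this consumes heap, not address
            // space tied to a mapping the OS may never release.
            return buffer;
        } finally {
            raf.close();
        }
    }
}
```

The trade-off is slower I/O and heap pressure for very large files, but repeated open/close cycles no longer leak virtual address space.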
Turning on aggressive heap can fix other problems associated with memory allocation for large files.
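For reference, the flag referred to is presumably the HotSpot option `-XX:+AggressiveHeap`, set on the JVM command line (the jar name below is illustrative):

```shell
# Hypothetical invocation: lets HotSpot size and manage the heap
# aggressively for large-allocation workloads.
java -XX:+AggressiveHeap -jar geoserver.jar
```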
CodeHaus Comment From: jgarnett - Time: Fri, 7 May 2004 19:39:45 -0500
IanS has kindly fixed this issue by providing a parameter that lets the user disable the use of MappedByteBuffer. This parameter shows up in the GeoServer DataStore definition screen, allowing end users to deal with the problem.
The only further thing we could do is detect when we need to switch behind the scenes, although some of the comments indicate that this is a time-dependent thrashing of the heap.
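The selection logic behind such a parameter presumably looks something like the following sketch (the field name `useMemoryMapped` and the class name are hypothetical, not necessarily what the actual fix uses):

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

public class ShapefileOpener {
    private final boolean useMemoryMapped; // hypothetical user parameter

    public ShapefileOpener(boolean useMemoryMapped) {
        this.useMemoryMapped = useMemoryMapped;
    }

    /** Returns the file contents, mapped or copied per the parameter. */
    public ByteBuffer open(String path) throws IOException {
        RandomAccessFile raf = new RandomAccessFile(path, "r");
        try {
            FileChannel channel = raf.getChannel();
            if (useMemoryMapped) {
                // Fast, but on NT4/Win2K the address space may never be
                // reclaimed after repeated open/close cycles.
                return channel.map(FileChannel.MapMode.READ_ONLY, 0, channel.size());
            }
            // Safe fallback: an ordinary heap buffer filled via read().
            ByteBuffer buf = ByteBuffer.allocate((int) channel.size());
            while (buf.hasRemaining() && channel.read(buf) >= 0) {
                // keep reading until the buffer is full or EOF
            }
            buf.flip();
            return buf;
        } finally {
            raf.close();
        }
    }
}
```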