Has anyone encountered SQL backup or restore failures that were ultimately diagnosed as a MemToLeave issue? This should only occur on 32-bit SQL Server versions, including 2000 and 2005. A number of other operations can generate a MemToLeave issue, but I am only addressing backup failures here. (The diagnosis should have led to messages in the SQL logs like: reserve contiguous memory of Size=65536, 131072, or some other power of 2 bytes failed.)
A search on the MemToLeave topic might point to using the -g switch at sqlservr startup, possibly setting this to 384 or 512, but hopefully not a higher value. If the only MemToLeave problem is with backups, then there is an alternative to jacking up the -g MemToLeave setting.
The default SQL Server backup allocates (or tries to) a certain number of buffers of a certain size (the BUFFERCOUNT and MAXTRANSFERSIZE settings). The default MAXTRANSFERSIZE is 1M (1,048,576 bytes for the digitally deficient). I am not sure what the default BUFFERCOUNT is; it might be 10 or 20. So in a 32-bit SQL Server instance doing various funky things, there very well might not be 10 or 20 contiguous 1M chunks of virtual address space (VAS) in the MemToLeave area (which is completely unrelated to how much physical memory your system has or how much physical memory is available, so do not even mention this, and just shoot anyone who does). When the allocation fails, the backup command will progressively try smaller MAXTRANSFERSIZE settings, which can take a while and may ultimately fail.
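If you want to watch this negotiation happen, trace flags 3605 and 3213 are often cited as writing the backup buffer configuration to the error log. Trace flag 3213 is undocumented, so verify the behavior on your build; the database name and path below are placeholders:

```sql
-- TF 3213 is reported (undocumented) to log the backup/restore buffer
-- configuration; TF 3605 routes trace output to the error log.
DBCC TRACEON (3605, 3213);

BACKUP DATABASE MyDatabase  -- hypothetical database name and path
TO DISK = N'D:\Backup\MyDatabase.bak';

DBCC TRACEOFF (3605, 3213);
```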
Of course jacking up the MemToLeave setting addresses this issue, but alternatively one could just decrease the BUFFERCOUNT or MAXTRANSFERSIZE. The default settings usually yield decent backup performance on ordinary storage systems (i.e., not the brute force configurations I discussed in an earlier post). It is quite possible that decreasing the BUFFERCOUNT to 4 and MAXTRANSFERSIZE to 262144 will not cause much of a drop in backup performance, perhaps 10-20%. If this seems severe, consider the alternatives: failed backups if you take no action, or the daytime impact of the service restart required for the -g512 setting. Which is more acceptable?
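A minimal sketch of the dialed-down backup (the database name and backup path are hypothetical; BUFFERCOUNT and MAXTRANSFERSIZE are the documented BACKUP options discussed above):

```sql
-- Each buffer now needs only a 256KB contiguous VAS region, and only
-- 4 of them (1MB total) instead of the 10-20 x 1MB default allocation.
BACKUP DATABASE MyDatabase  -- hypothetical database name and path
TO DISK = N'D:\Backup\MyDatabase.bak'
WITH BUFFERCOUNT = 4, MAXTRANSFERSIZE = 262144;
```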
If you use third party database backup compression software that is multi-threaded, the default may be X buffers per thread. If your system has 16 cores, the default thread count is one-half the number of cores (8 threads), and X is 10, then that might be 80 x 1MB buffers by default.
In case you are the curious type, the BUFFERCOUNT and MAXTRANSFERSIZE settings are described in the SQL Server 2005 BOL, but not in the SQL Server 2000 BOL; they are mentioned in the SDK, an optional part of the SQL Server 2000 installation. Everybody installs the SDK part on their personal systems, right? It has lots of code samples.
From comments below: At one time LiteSpeed defaulted to n-1 threads, so if you had 4 cores, 3 threads was the default. A Core2 at 2.6GHz can compress 150-200MB/sec depending on the data, so with 16 cores, 15 threads could do 2.2GB/sec if you can feed them. If you cannot feed the beast, then what is the point of all the threads? If your disks can read just 600MB/sec, 4 threads will work fine.
Last I saw, 4.6 was supposed to default to 4 buffers per thread on 32-bit SQL. For LiteSpeed, I would recommend just dialing in the right thread count and BUFFERCOUNT: set threads to no more than what your disks can supply, and set BUFFERCOUNT to 4 if that is not already the default. Only dial down MAXTRANSFERSIZE if you still have failures or see the error message in the log.
I do not expect VAS problems on restore because the restore target is often not busy and should not have a fragmented VAS. However, if there are VAS-related restore failures, note that of the 3 backup tuning parameters (threads, BUFFERCOUNT, and MAXTRANSFERSIZE), two automatically carry over to the restore: threads and MAXTRANSFERSIZE. The setting that is independent of the backup is BUFFERCOUNT, which can be tuned on the restore independently of the backup setting. If the restore has problems, it may be necessary to tune threads and MAXTRANSFERSIZE on the backup side to lower values than what is required merely for a successful backup.
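A sketch of tuning the restore side independently (BUFFERCOUNT is accepted on RESTORE as well as BACKUP; the database name and path are hypothetical):

```sql
-- MAXTRANSFERSIZE carries over from how the backup was written;
-- BUFFERCOUNT can be set here independently of the backup setting.
RESTORE DATABASE MyDatabase  -- hypothetical database name and path
FROM DISK = N'D:\Backup\MyDatabase.bak'
WITH BUFFERCOUNT = 4, REPLACE;
```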