Monday, March 19, 2012

Decreased performance from using blobs

I'm trying to store files such as PDFs in my SQL Server database as BLOBs for a particular .NET application, and I'm noticing a rather large performance hit compared to simply storing the PDF in the web server's file system.

I have a PDF of 7 MB. Served straight from a web server it takes roughly 9 seconds to download. When I store the same PDF in my database and then write it onto the page using the code below, it takes roughly 40 seconds to display. Can anyone help me close this performance gap? Should I not be using this method to store large files?

Dim PdfCol As Integer = 0 ' the ordinal of the BLOB column
Dim dr As SqlDataReader = cmd.ExecuteReader()
Dim mypdf() As Byte = Nothing ' declared outside the If so it is still in scope below

If dr.Read() Then
    ' Passing Nothing as the buffer makes GetBytes return the total length of the BLOB.
    Dim pdfLength As Long = dr.GetBytes(PdfCol, 0, Nothing, 0, Integer.MaxValue)
    ReDim mypdf(CInt(pdfLength) - 1)
    dr.GetBytes(PdfCol, 0, mypdf, 0, mypdf.Length)
End If

Response.Clear()
Response.Buffer = True
Response.ClearContent()
Response.ClearHeaders()
Response.ContentType = "application/pdf"
Response.BinaryWrite(mypdf)
Response.Flush()
Response.Close()|||* Turn off the output cache, so that the first data can be sent immediately.
* Do NOT load the whole thing into memory and send it out. Instead grab little chunks and send them. Start with 8k, then take something like 64k per trip.

Currently you only start sending data once the whole 7 MB has been loaded from the database, and this is ridiculously inefficient.|||By output cache, do you mean the output in the Page/UC directive?
Also, could you show me an example of grabbing chunks and sending them out? I'm not real experienced with using bytes.
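The "output cache" mentioned above most likely refers to ASP.NET response buffering rather than the @ OutputCache page directive; assuming the same Response object as in the code above, turning it off is one line:

' With buffering off, each write goes to the client immediately
' instead of being held until the whole response has been built.
Response.BufferOutput = False ' Response.Buffer = False is equivalent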

I tried the following, but I get a dialog window that says "The file is damaged and cannot be repaired."

If dr.Read() Then
    Dim intBufferSize As Integer = 64000
    Dim longStartIndex As Long = 0
    Dim longRetVal As Long
    Dim b(intBufferSize) As Byte

    'Read the first chunk of the document.
    longRetVal = dr.GetBytes(PdfCol, 0, b, 0, intBufferSize)

    'Reset the start index.
    longStartIndex = 0

    Response.Clear()
    Response.Buffer = True
    Response.ClearContent()
    Response.ClearHeaders()
    Response.ContentType = "application/pdf"

    'Continue reading and writing while there are bytes beyond the size of the buffer.
    While longRetVal = intBufferSize
        Response.BinaryWrite(b)
        Response.Flush()

        'Reposition the start index to the end of the last buffer and refill the buffer.
        longStartIndex += intBufferSize
        longRetVal = dr.GetBytes(PdfCol, longStartIndex, b, 0, intBufferSize)
    End While

    'Write the remaining buffer.
    Response.BinaryWrite(b)
    Response.Flush()
    Response.Close()
End If|||OK, I've managed to write my BLOB in chunks to a specified path on my hard drive using the code from: http://support.microsoft.com/default.aspx?kbid=317043
But whenever I try to use Response.BinaryWrite so that the PDF is loaded onto the web page with ContentType set to "application/pdf" or "application/octet-stream", I get the same error that the file is damaged and cannot be repaired.|||Thona, you are awesome! I forgot to mention that earlier :)

I did finally get this to work. Apparently if my buffer size is too large the file gets corrupted. I set my buffer size to 24000 and everything seems to work OK, but if the buffer size is 29000 then the file gets corrupted. Does anyone know why this is? Can bytes only be a certain size?
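Bytes have no inherent size limit, and the buffer size itself is not the real problem. In the loop above, Dim b(intBufferSize) As Byte actually allocates intBufferSize + 1 bytes (VB.NET array bounds are inclusive), and, more importantly, the final Response.BinaryWrite(b) sends the entire buffer even when the last GetBytes call returned fewer bytes, so stale bytes left over from the previous read get appended to the end of the PDF. Whether a particular buffer size appears to work then just depends on how the file length happens to line up with the buffer. A sketch of a corrected loop, assuming the same dr, PdfCol, and ASP.NET Response objects as in the posts above, with a hypothetical 64 KB buffer; it writes only the bytes actually read on each trip:

Const BufferSize As Integer = 65536
Dim buffer(BufferSize - 1) As Byte ' exactly BufferSize bytes; VB array bounds are inclusive
Dim startIndex As Long = 0
Dim bytesRead As Long

If dr.Read() Then
    Response.Clear()
    Response.Buffer = False ' stream each chunk to the client as it is read
    Response.ContentType = "application/pdf"

    bytesRead = dr.GetBytes(PdfCol, startIndex, buffer, 0, BufferSize)
    While bytesRead > 0
        ' Write only the bytes actually read, never the whole buffer.
        Response.OutputStream.Write(buffer, 0, CInt(bytesRead))
        startIndex += bytesRead
        bytesRead = dr.GetBytes(PdfCol, startIndex, buffer, 0, BufferSize)
    End While
End If

With this shape the loop terminates when GetBytes returns 0 and no partial or stale buffer is ever sent, so any buffer size should produce an intact file.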
