I can only assume that the "image *" (pointer to type 'image') is some sort of structure used by OpenCV? If that is indeed the case, then it would be rather dangerous to pass it into a ring buffer, as the data gets written into it strictly as raw bytes, i.e. the acquisition does not invoke any OOP insertion/accessor functions. The cast to (void *) alone tells you that you are now meddling with the internal structure of the struct or class itself. If, on the other hand, "image" is nothing more than a typedef for a standard type, say, "char" or "unsigned char", then you are safe.
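To make the danger concrete, here's a minimal sketch (the 'image' struct below is purely hypothetical, standing in for whatever OpenCV-style type you actually have):

    #include <cstring>

    // Hypothetical 'image' struct -- NOT the real OpenCV/IMAQ type.
    // The point: it holds bookkeeping fields plus a POINTER to pixel
    // storage, not the pixel bytes themselves.
    struct image {
        int            width;
        int            height;
        unsigned char* pixels;   // allocated elsewhere
    };

    void fillFromGrabber(image* img, const unsigned char* buf, std::size_t n)
    {
        // DANGEROUS: raw frame bytes would overwrite width/height/pixels
        // themselves -- the struct's internals -- not the pixel storage:
        // std::memcpy((void *)img, buf, n);

        // SAFE: write into the storage the object points at:
        std::memcpy(img->pixels, buf, n);
    }

If 'image' is a class with a vtable or other internal bookkeeping, the commented-out line corrupts it outright.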
Both options should have been identical in CPU utilization - in both cases, IMAQ will write directly to the buffers, treating them as nothing more than simple storage bins of bytes. The fact that you see a 50% increase when using "image *" probably reflects some OOP process, namely a function call, being interposed. Not sure how this is done/possible, as IMAQ has no idea what an "image *" is. If, on the other hand, you are using imgSessionExamineBuffer() to lock a buffer and then copying that buffer via memcpy() or via a member function of 'image', then that's where the CPU utilization will spike. OOP methods, unless static (or inlined), incur a function-call penalty for each access, and this gets extremely busy if you use accessors/mutators for every pixel, as good OOP practice dictates.
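Here's a sketch of the two copy paths (the class and its method names are made up for illustration):

    #include <cstring>

    // Hypothetical image class -- names are illustrative only.
    class Image {
        unsigned char* data_;
        int            w_, h_;
    public:
        Image(unsigned char* d, int w, int h) : data_(d), w_(w), h_(h) {}
        void setPixel(int x, int y, unsigned char v) { data_[y * w_ + x] = v; }
        unsigned char* raw()             { return data_; }
        std::size_t    sizeBytes() const { return (std::size_t)w_ * h_; }
    };

    void copyFrame(Image& img, const unsigned char* grabBuf, int w, int h)
    {
        (void)w; (void)h;  // only the slow variant below uses these

        // SLOW: one (possibly non-inlined) call per pixel -- on a
        // megapixel frame at video rates this is your CPU spike:
        // for (int y = 0; y < h; ++y)
        //     for (int x = 0; x < w; ++x)
        //         img.setPixel(x, y, grabBuf[y * w + x]);

        // FAST: one bulk copy, no per-pixel call overhead:
        std::memcpy(img.raw(), grabBuf, img.sizeBytes());
    }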
My advice would be to stream data, as much as possible, into plain buffers which you can then hopefully attach directly to your OOP objects. See if the 'image' type, if it's not a native type, has methods that let you attach a simple (char []) or (unsigned char []) array - note that in C terms these are pretty much identical to char * / unsigned char * - to use as its memory source.
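For example, if 'image' turns out to be OpenCV's IplImage, OpenCV does let you attach external memory instead of copying (treat this as a sketch; the header name/path varies by OpenCV version):

    #include <cxcore.h>  // or cv.h, depending on your OpenCV install

    // Wrap an existing acquisition buffer in an image header WITHOUT
    // copying -- the header merely points at your bytes.
    IplImage* wrapBuffer(unsigned char* buf, int width, int height)
    {
        IplImage* img = cvCreateImageHeader(cvSize(width, height),
                                            IPL_DEPTH_8U, 1);
        cvSetData(img, buf, width);  // step = bytes per row (8-bit, 1 channel)
        return img;                  // free the header (not your buffer!)
                                     // with cvReleaseImageHeader(&img)
    }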
As for "casting" from 10-bit to 8-bit, this is simply not possible. A 10-bit image capture requires 2-bytes per pixel, whereas 8-bit only requires 1. First, assuming you could live with the reduce dynamic range (8-bits permit values from 0..256, 10-bit from 0..1024), you would have to decide what you wish to lop off. Let's say you have dim images, and all values are <=256. If this is the case, then you only need the lower byte. To see why casting is not possible, you would need to read every _other_ byte (since each pixel in your original acquisition required 2 bytes), skipping the "high" byte of the 2-byte word (assuming PC architecture, big endian). Alternatively, you could choose to read the high byte, skipping every other byte but starting at offset 1 instead of 0. Either way, you see it's not trivial to convert from 10-bits to 8-bits, and this is not even considering the loss in dynamic range, e.g. if you had values between, say, 120 through 500 - how would you store that into a byte that only holds 0..256? Even if you did a baseline subtraction (expensive - every pixels needs to get subtracted) you would still end up with values from 0..480.
Your best option would be to find routines that can deal with 16-bit images (these are hard to find - only TIFF/PNG support 16-bit grayscale for storage, and VS.Net, I just found out, lets you create objects for them, but you can't save/display/print them). Then it's a trivial matter to "cast" from 10-bit to 16-bit, as you have increased dynamic range (no truncation to worry about) and it's all the same to the system - 2 bytes per pixel. You just need to make sure that the function is expecting 16-bit (2-byte or word-sized) values, e.g. short wordArray[]; __int16 wordArray[]; System::Int16 wordArray[]; etc., depending on your platform/compiler/etc.
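The "cast" then really is free - the acquisition buffer is already 2 bytes per pixel, so you just treat it as an array of words (a sketch; the optional shift is only for viewers that expect full-scale 16-bit data):

    #include <cstddef>

    void asSixteenBit(void* acqBuffer, std::size_t nPixels)
    {
        unsigned short* pixels = (unsigned short*)acqBuffer;  // no copy,
                                                              // no conversion

        // Optional: shift left 6 bits so 0..1023 maps to 0..65472,
        // nearly the full 16-bit scale, for display purposes.
        for (std::size_t i = 0; i < nPixels; ++i)
            pixels[i] <<= 6;
    }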
Good luck.
--
[System info: NI-1429e running in 'Base' CL-mode plugged into an x4 PCI-e slot on a Dell PowerEdge 1800, dual 3.2GHz Xeon, 6GB RAM, Windows 2003 Server SP1, LV8.0/7.1, IMAQ v3.5, Dell CERC SATA RAID controller card with 4x250GB Seagate HDD, one Seagate 250GB HDD connected to system's primary SATA port for OS.]