I don't know how this works exactly, but I do know a bit about software programming. Wouldn't Apple, with their own implementation of OpenGL, be able to improve the performance of their drivers even for older versions of OpenGL? The API is standardized, but afaik the underlying implementation could change without breaking things. If they improve their drivers, even applications using OpenGL 2.1 could benefit, I think.
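To illustrate the point: an application only ever talks to the standardized API, and the driver answering those calls can be swapped out underneath it. A minimal sketch (assuming GLFW is installed; the interesting part is just the glGetString calls):

```c
/* The app requests a legacy GL 2.1 context through the standard API.
 * Which driver answers is up to the OS, so a faster implementation
 * could be swapped in underneath without the code changing. */
#include <stdio.h>
#include <GLFW/glfw3.h>

int main(void) {
    if (!glfwInit()) return 1;
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 2);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 1);
    GLFWwindow *win = glfwCreateWindow(64, 64, "probe", NULL, NULL);
    if (!win) { glfwTerminate(); return 1; }
    glfwMakeContextCurrent(win);

    /* The implementation identifies itself at runtime; the calling
     * code is identical on every vendor's driver. */
    printf("Vendor:   %s\n", (const char *)glGetString(GL_VENDOR));
    printf("Renderer: %s\n", (const char *)glGetString(GL_RENDERER));
    printf("Version:  %s\n", (const char *)glGetString(GL_VERSION));

    glfwDestroyWindow(win);
    glfwTerminate();
    return 0;
}
```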
The Iris Pro has 40 shader cores* @ 200-1300 MHz and 1 GB of memory. The 750M has 384 shader units @ 967 MHz and 2 GB of memory. In my benchmark I'm getting 9 layers of DXV decoding. That's probably 9 shader cores being used. I would guess the memory is used even less.
(*) edit: I've since learned that those 40 cores actually contain 160 shader units.
In real-world applications with loads of effects and possibly external graphics rendering hooked up to Resolume through Syphon, of course you'd want a beefier GPU. But just looking at the benchmark results, I'm not sure where the bottleneck is. Possibly those 375 remaining shader cores of my 750M are idle when I run the benchmark. The bottleneck could be CPU speed, memory bandwidth or disk access...
Maybe Joris has an idea about this? Is DXV always using a single shader core per layer for decoding?
Re: Macbook integrated vs discrete gpu. No gain.
Have a look at this site and you'll understand shaders and cores much better:
http://antongerdelan.net/opengl/shaders.html
Then it also becomes clear why a GT750 is better than an Iris Pro.
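In a nutshell: a fragment shader is a tiny program the GPU runs once per output pixel, fanned out over all shader units at once. A rough sketch of what such a shader looks like (illustrative GLSL 1.20 in a C string; the uniform names are made up):

```c
/* Illustrative fragment shader for blending one video layer.
 * The body executes for EVERY output pixel in parallel -- at
 * 1920x1080 that is ~2 million invocations per frame, which is
 * how all 160 (or 384) units stay busy on a single layer. */
static const char *frag_src =
    "#version 120\n"
    "uniform sampler2D layer;   /* one decoded video frame */\n"
    "uniform float     opacity; /* layer opacity 0..1 */\n"
    "void main() {\n"
    "    vec4 c = texture2D(layer, gl_TexCoord[0].st);\n"
    "    gl_FragColor = vec4(c.rgb, c.a * opacity);\n"
    "}\n";

#include <OpenGL/gl.h> /* macOS header; <GL/gl.h> elsewhere */

/* Hand the source to the driver, which compiles it for whatever
 * GPU is active -- Iris Pro or 750M, same code. */
GLuint compile_layer_shader(void) {
    GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(fs, 1, &frag_src, NULL);
    glCompileShader(fs);
    return fs;
}
```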

Re: Macbook integrated vs discrete gpu. No gain.
Thanks for the info. I wasn't aware that all cores are used in parallel in the rendering pipeline. Now that I re-read Joris's reply, that's what he already said about DXV. OK, clear!

I still don't agree, though. This doesn't change the point I am trying to make: the performance bottleneck in the benchmark might well be something other than raw GPU power. Soon I will have benchmark results from an Iris Pro-only machine, and then we'll know more.
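One crude way I could try to tell those bottlenecks apart (a sketch only, nothing to do with how Resolume works internally): time a frame with and without glFinish(). If forcing the GPU to drain barely changes the frame time, the wait was on the CPU side (decode, disk, upload) all along.

```c
/* Rough bottleneck probe. Assumes a GL context is current and that
 * render_frame() is your own draw code -- both hypothetical here. */
#include <stdio.h>
#include <time.h>
#include <OpenGL/gl.h> /* macOS header; <GL/gl.h> elsewhere */

extern void render_frame(void); /* hypothetical: one frame of layering */

static double now_sec(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

/* Compare timed_frame(0) against timed_frame(1): if the numbers are
 * close, the GPU was never what you were waiting on. */
double timed_frame(int wait_for_gpu) {
    double t0 = now_sec();
    render_frame();
    if (wait_for_gpu)
        glFinish(); /* block until the GPU has really finished */
    return now_sec() - t0;
}
```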
Re: Macbook integrated vs discrete gpu. No gain.
I can't wait to see your test results!
If you can, please test with various configurations of outputs, and make sure you do a test with three outputs (two DP and one HDMI), all at 1920x1080.
Also, if you have any MST hubs, ZOTAC Mini-DisplayPort to Dual HDMI, or Matrox DH2G/TH2G adapters available, include them in the test to get a sense of how the system handles higher resolutions with passive or active multi-display adapters.
Re: Macbook integrated vs discrete gpu. No gain.
Iris Pro might be OK for some basic VJ/video work, but I don't think it's ready for real performance use.
You might be able to play back video by itself and layer it fine, but what about things that require OpenGL drawing, like effects?
Get a hold of the Plexus plugin and see how far you can turn the settings up before it crashes.
Almost any GTX will kill the Iris Pro in pixel/shader rendering, which is where it matters.
http://www.game-debate.com/gpu/index.ph ... 00-desktop
Re: Macbook integrated vs discrete gpu. No gain.
We were talking about the Iris Pro in the context of the current MacBooks, so it only makes sense to compare it with the 750M in this case:
http://www.game-debate.com/gpu/index.ph ... -2gb-gddr5
It was never my point to prove that the Iris Pro is in the same league for more complicated FX and 3D rendering. For an upcoming production I need to output about 6 layers of 1080p with some masking and simple coloring, so I'm finding out if the Iris Pro is capable enough, and so far I think it is. Tonight I'll have the laptop for testing.
Also, I started this thread to discuss the benchmark results between the two cards, and that is still an interesting subject, because it makes you wonder where Resolume's performance bottleneck is in terms of pure layering of video content.
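For reference, "pure layering" in GL terms is little more than drawing one textured quad per layer with blending enabled. A hedged sketch (legacy immediate mode; the texture handles are assumed to be already-uploaded decoded frames):

```c
/* Sketch of pure layering: one textured full-screen quad per layer,
 * back to front, with alpha blending. 'tex' holds the (hypothetical)
 * already-decoded, already-uploaded frame textures. */
#include <OpenGL/gl.h> /* macOS header; <GL/gl.h> elsewhere */

void composite_layers(const GLuint *tex, int n_layers) {
    glEnable(GL_TEXTURE_2D);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

    for (int i = 0; i < n_layers; ++i) {
        glBindTexture(GL_TEXTURE_2D, tex[i]);
        glBegin(GL_QUADS); /* legacy immediate mode, fine on GL 2.1 */
        glTexCoord2f(0, 0); glVertex2f(-1, -1);
        glTexCoord2f(1, 0); glVertex2f( 1, -1);
        glTexCoord2f(1, 1); glVertex2f( 1,  1);
        glTexCoord2f(0, 1); glVertex2f(-1,  1);
        glEnd();
        /* Each extra layer re-reads and re-writes the whole framebuffer,
         * so N layers cost roughly N full-screen blends -- which is
         * where fill rate and memory bandwidth bite. */
    }
}
```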
Re: Macbook integrated vs discrete gpu. No gain.
So I've finally done real-world tests with a friend's MBP. I posted the benchmark results here: viewtopic.php?f=11&t=11093&start=20
My theory is proven wrong.
Clearly the game changes completely when outputting to an external display, like Oaktown suggested earlier. The Iris Pro doesn't hold up and is only capable of doing 3 layers of 1080p. So it still matters a lot whether you use integrated or discrete graphics, even when the application is only decoding and layering video files. I am guessing the performance bottleneck is memory bandwidth, given that the 750M has 3x that of the Iris Pro.
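Some back-of-the-envelope numbers behind that guess (very rough: DXV actually stays compressed until the shader decodes it, so treat this as an upper bound on the decoded side):

```c
/* Rough bandwidth estimate per 1080p layer. Assumptions: 32-bit RGBA
 * once decoded, 30 fps, and each layer costs one texture read plus a
 * read+write of the framebuffer for blending. */
#include <stdio.h>

int main(void) {
    const double pixels    = 1920.0 * 1080.0;           /* ~2.07 Mpx   */
    const double bytes_px  = 4.0;                       /* RGBA8       */
    const double fps       = 30.0;
    const double tex_read  = pixels * bytes_px * fps;   /* per layer   */
    const double blend_rw  = 2.0 * pixels * bytes_px * fps;

    for (int layers = 3; layers <= 9; layers += 3)
        printf("%d layers: ~%.1f GB/s\n",
               layers, layers * (tex_read + blend_rw) / 1e9);
    /* Prints ~2.2 GB/s for 3 layers up to ~6.7 GB/s for 9 -- small
     * next to the 750M's ~80 GB/s of dedicated GDDR5, but a real chunk
     * of the ~26 GB/s the Iris Pro shares with the CPU (its eDRAM
     * helps, but it's only 128 MB). */
    return 0;
}
```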
I noticed in an older report with the same hardware that 4.1.8 performed a little better than my 4.1.11 setup, so I added an extra benchmark to confirm that.
I noticed that the FPS can fluctuate quite a bit; after a few minutes it seems to stabilise and usually goes up a few frames. I don't understand how that works, but it makes it difficult to estimate the average FPS once it goes above 20, so those numbers are not too accurate.
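For what it's worth, a smoothed average takes some of the guesswork out of a jumpy readout. A small sketch (the smoothing factor is just a value I picked, not anything Resolume exposes):

```c
/* Exponential moving average over per-frame times: jitter is damped,
 * and the number settles toward the true average after a few seconds. */
#include <stdio.h>

double smoothed_fps(double prev_avg, double frame_dt_sec) {
    const double alpha = 0.05;        /* smaller = smoother but slower */
    double inst = 1.0 / frame_dt_sec; /* instantaneous FPS             */
    return prev_avg + alpha * (inst - prev_avg);
}

int main(void) {
    double avg = 60.0;                /* seed with a plausible value   */
    double dts[] = { 0.016, 0.040, 0.017, 0.033, 0.018 }; /* fake data */
    for (int i = 0; i < 5; ++i) {
        avg = smoothed_fps(avg, dts[i]);
        printf("avg after frame %d: %.1f fps\n", i + 1, avg);
    }
    return 0;
}
```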
Enough of this benchmarking already. I've become a bit wiser. Hope all this is helpful to others as well.
