Hello, here are a few suggestions of things I would really like to see in OpenGL.

Offline GLSL Compilation and Official Bytecode

Okay, this is something that's been requested before, but it has been ignored. There needs to be an official bytecode format that specifies the binary. Why? Offline compilation with ALL optimizations on, and third-party compilers. As it stands currently, you must distribute the GLSL with your app (not a big deal) and then waste the user's time (in addition to the time it takes to perform an install) recompiling shaders, work that is perfectly capable of being done offline. The costs:

1: Slower start-up time whenever hardware or drivers change.

2: You don't get the best optimizations you could. Maybe the driver's implementation of the compiler is flawed; maybe it has an outright bug. Yeah, you can use Cg, but that's no better: you're locked to one company to provide updates to a closed-source compiler. If an official bytecode format exists and is maintained, offline GLSL compilation can be accomplished by more than just implementors of OpenGL.

Make it an extension at first, then make it part of the official spec. It doesn't matter how the bytecode looks, how complicated it is, or anything like that, as long as it's functional. Once you have the bytecode, just use the existing APIs to apply the binary; maybe it's applied with a format of GL_BYTECODE_FORMAT or something similar. I can implement my own HLSL compiler and not be locked in to Microsoft's implementation if I want to. It's kind of funny, actually: I can compile GLSL to HLSL bytecode. Some people will argue that the GL_ARB_get_program_binary extension already allows you to produce binaries that are optimized for the specific hardware instead of just optimized in general. However, there's no reason an official bytecode format couldn't be converted to a binary by the GL in the same way, producing a binary optimized for the hardware. It would probably be easier, too, since everything would be in a bytecode format that is easy to optimize and interpret. This extension should be propagated back all the way to the first hardware that's capable of executing shaders; D3D worked with bytecode all the way back to when shaders were first introduced. Here's a comparison between D3D9 bytecode shader loads and OpenGL shader loads (among other things).

NVIDIA and AMD offer separate but (as far as I can tell) equivalent solutions for accessing textures without binding them, saving a lot of unnecessary driver overhead. These are the two big players in the game. I've heard they offer a bindless solution as well, but I haven't bothered verifying this. In this day and age, where virtual texturing is starting to become commonplace (at least in one form or another), the ability to directly access texture memory is becoming increasingly important. For reference, see GL_AMD_pinned_memory, GL_NV_bindless_texture, and GL_NV_vertex_buffer_unified_memory.

API To Determine What's Supported By Hardware

D3D9 and below had GetDeviceCaps(). D3D10+ has "feature levels" and "feature sets" that you can check. These feature sets are useful to determine what the driver has support for. Now, believe me, I like that OpenGL forces the driver to support some things, even if it has to fall back to software to do so. However, I also want to find out what falls back to software. That way, in my code, I can choose to avoid that particular feature and perform a work-around. For example: issue an "Is Hardware" query for a set of OpenGL commands using the current state, then check the query for a boolean value. If true, the hardware can execute the GL commands within the query statement; if false, it can't, so do something else. This seems like it would be easy to implement. At least, it would be easy to add to the specs (no new entry points, one added enum value, and some description).

Why is blending still fixed function? I've noticed that in OpenGL ES, on NVIDIA Tegra hardware, whenever you change the fixed-function alpha blending state, the shader bytecode is regenerated. (Someone at Unity discovered this actually; I don't remember where I read it specifically, sorry.) So, I assume that means the hardware is capable of performing blending in a shader. The Frostbite 2 (Battlefield 3) guys seem to want this as well. I'll leave this section short, since AAA developers already want this feature. (Sorry for not providing more in terms of citation here.)

Samplers seem quite fixed-function to me, as they rely on various state settings and so on. Perhaps sampler shaders could be implemented? Basically, when a sampler shader is applied, it would allow you to program exactly what type of filtering is applied when a texture image is retrieved.
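For reference, the existing GL_ARB_get_program_binary flow mentioned in the bytecode discussion looks roughly like this. A minimal sketch, assuming a current GL context with the extension (or GL 4.1+) and loaded entry points; error handling and file I/O are trimmed.

```c
/* Sketch: saving and restoring a driver-specific program binary with
 * GL_ARB_get_program_binary. Requires a current GL context. */
#include <stdlib.h>

/* Call before glLinkProgram() so the driver keeps a retrievable binary. */
void request_retrievable_binary(GLuint program)
{
    glProgramParameteri(program, GL_PROGRAM_BINARY_RETRIEVABLE_HINT, GL_TRUE);
}

/* After linking: pull the driver-specific binary out for caching. */
void *save_binary(GLuint program, GLsizei *out_len, GLenum *out_format)
{
    GLint len = 0;
    glGetProgramiv(program, GL_PROGRAM_BINARY_LENGTH, &len);
    void *blob = malloc((size_t)len);
    glGetProgramBinary(program, len, out_len, out_format, blob);
    return blob; /* write to disk, keyed by GPU/driver version */
}

/* On a later run: try the cached binary instead of recompiling GLSL. */
int load_binary(GLuint program, GLenum format, const void *blob, GLsizei len)
{
    GLint ok = GL_FALSE;
    glProgramBinary(program, format, blob, len);
    glGetProgramiv(program, GL_LINK_STATUS, &ok);
    return ok == GL_TRUE; /* GL_FALSE: driver changed, fall back to source */
}
```

An official bytecode would play the role of `blob` here, but portably: one published format (the hypothetical GL_BYTECODE_FORMAT) instead of an opaque per-driver one that is invalidated on every driver update.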
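As a concrete example of the bindless approach cited above (GL_NV_bindless_texture), accessing a texture without binding it looks approximately like this. A sketch only, assuming the extension is present, a current context, and a fragment shader with a `sampler2D` uniform at `tex_location`.

```c
/* Sketch: GL_NV_bindless_texture usage. The handle replaces the usual
 * glActiveTexture/glBindTexture dance, cutting per-draw driver overhead. */
void use_texture_bindlessly(GLuint texture, GLint tex_location)
{
    GLuint64 handle = glGetTextureHandleNV(texture);
    glMakeTextureHandleResidentNV(handle);       /* must be resident first */
    glUniformHandleui64NV(tex_location, handle); /* shader samples directly */
}
```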
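Until something like D3D's feature levels exists, the closest thing GL offers for the "what's supported" question is extension checking. A core-profile sketch (GL 3.0+), assuming a current context; it tells you a feature exists, but notably not whether it falls back to software, which is the gap the query proposal addresses.

```c
#include <string.h>

/* Core-profile extension check: glGetStringi (GL 3.0+) replaces parsing
 * the old space-separated GL_EXTENSIONS string. */
int has_extension(const char *name)
{
    GLint i, n = 0;
    glGetIntegerv(GL_NUM_EXTENSIONS, &n);
    for (i = 0; i < n; ++i)
        if (strcmp((const char *)glGetStringi(GL_EXTENSIONS, (GLuint)i),
                   name) == 0)
            return 1;
    return 0;
}
/* e.g. if (!has_extension("GL_NV_bindless_texture")) take a fallback path */
```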
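The proposed "Is Hardware" query could reuse the existing query-object entry points, which is why it would need only one new enum. Everything marked HYPOTHETICAL below does not exist in any GL spec; this just illustrates the shape of the proposal.

```c
/* HYPOTHETICAL: GL_IS_HARDWARE is a made-up enum (the one addition the
 * proposal needs); the query entry points themselves are existing GL. */
#define GL_IS_HARDWARE 0x9FFF /* invented value for illustration */

void probe_feature(void)
{
    GLuint q, is_hw = GL_FALSE;
    glGenQueries(1, &q);

    glBeginQuery(GL_IS_HARDWARE, q); /* record commands under current state */
    draw_with_feature_in_question(); /* placeholder for the commands to test */
    glEndQuery(GL_IS_HARDWARE);

    glGetQueryObjectuiv(q, GL_QUERY_RESULT, &is_hw);
    if (!is_hw) {
        /* driver would fall back to software: use a work-around instead */
    }
}
```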
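For context on the blending complaint: the fixed-function alpha blend state in question is just a couple of state settings, and on the Tegra case described above, flipping them reportedly triggers a shader recompile behind your back.

```c
/* The fixed-function blend state being discussed. On hardware that
 * actually blends in the shader, changing this forces a shader patch. */
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); /* classic alpha blend */
/* A programmable blend stage would replace these enum combinations with
 * user code that reads the destination color. */
```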
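To make the sampler suggestion concrete: today (GL 3.3 sampler objects), every sampling behavior is an enum knob. A sampler shader, as proposed, would swap these enums for code. Sketch assumes a current GL 3.3+ context.

```c
/* Today's fixed-function sampler state: all behavior is chosen from
 * fixed enum values, none of it programmable. */
GLuint s;
glGenSamplers(1, &s);
glSamplerParameteri(s, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glSamplerParameteri(s, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glSamplerParameteri(s, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glBindSampler(0, s); /* applies to whatever texture unit 0 samples */
/* A sampler shader would instead let you write the filtering yourself,
 * e.g. custom kernels or procedural borders, per texture fetch. */
```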