SceneKit & shaders

Note: Since writing this originally I’ve figured out how to get at least some aspects of shaders working.  I’m still trying to flesh out the rest.  I’ll post an update or revised entry at some point.

Ugh.  SceneKit supports shaders, technically.  But in practice it’s all but unusable.  Let me count the reasons:

  1. It doesn’t support geometry shaders at all.
  2. There’s absolutely no example code anywhere on the web.  Certainly none from Apple.
  3. There’s essentially zero documentation.  Not even from 3rd parties.
  4. You must set both a fragment and a vertex shader on every SCNProgram.  This is actually documented, though poorly, and the behaviour is certainly not intuitive.  It’s not hard to write a “no-op” version of either (see the sketch just after this list), but it’s stupid busywork, and it exacerbates all the other issues below.
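
For what it’s worth, here’s how small the “no-op” pairing can be – a minimal sketch, assuming the GL-era vertexShader / fragmentShader string properties on SCNProgram, with hypothetical attribute & uniform names (a_position, u_mvp) that still have to be bound manually:

    #import <SceneKit/SceneKit.h>

    // A minimal SCNProgram: a pass-through vertex shader and a
    // flat-colour fragment shader.  SceneKit won't populate a_position
    // or u_mvp unless you bind them yourself (see below).
    static SCNProgram *MakeMinimalProgram(void)
    {
        SCNProgram *program = [SCNProgram program];
        program.vertexShader =
            @"attribute vec4 a_position;\n"
            @"uniform mat4 u_mvp;\n"
            @"void main() {\n"
            @"    gl_Position = u_mvp * a_position;\n"
            @"}";
        program.fragmentShader =
            @"void main() {\n"
            @"    gl_FragColor = vec4(1.0);\n"  // opaque white
            @"}";
        return program;
    }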

Those are just the warnings.  I should have paid heed.  Instead I decided to reverse-engineer it anyway.  Turns out it was a waste of time, because:

  1. Once you start using shaders, you lose all the functionality of SceneKit.  The most obvious loss is textures, but it’s actually everything – geometry, projections, positions – everything.  You must load and bind it all manually; vertices must be fed into your shader by hand.  There are no intrinsics, no built-ins: stupidly simple things like gl_Color and gl_ModelViewProjectionMatrix are treated as arbitrary uniforms that must be bound manually.  And to do that you have to implement an Objective-C delegate method (sketched just after this list), so your performance is going to be shit unless they utilise IMP caching, in which case it’s merely going to be terrible.  I don’t understand how this could possibly be made to work.
  2. Your shader is compiled not just once for every single material it’s used on, but one and a half times per material.  i.e. if you create a single SCNProgram and assign it to 100 distinct SCNMaterials (each on its own unique SCNNode), SceneKit will compile both the vertex & fragment shaders 150 times each.  The per-material part only makes sense in the context of the above observation: the SceneKit objects your shaders happen to hang off are completely irrelevant, so each use is treated as a completely independent shader, because each one has to be set up manually.  I have no idea what the extra 50% is about, though.
  3. Since your shader is not shared between materials or nodes, it gets set up and torn down once for every single drawable – every SCNNode.  So you get ~twenty OpenGL calls minimum per drawable.  Admittedly some of these are SceneKit’s usual drawing commands, but most are related to the shader, and they include gems like resetting the blend function every single time, and glMatrixMode + glPopMatrix + glMatrixMode + glPushMatrix in sequence, repeated, for the same matrix & value (the identity matrix, naturally).
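
To make the “everything is manual” point concrete, this is roughly what the delegate dance looks like – a sketch only, assuming the 10.8-era SCNProgramDelegate binding callback, with hypothetical names (ManualBinder, u_mvp):

    #import <SceneKit/SceneKit.h>
    #import <GLKit/GLKit.h>
    #import <OpenGL/gl.h>

    // Every uniform – even the model-view-projection matrix – has to be
    // pushed by hand from this callback, per symbol, per draw.
    @interface ManualBinder : NSObject <SCNProgramDelegate>
    @property (nonatomic) GLKMatrix4 modelViewProjection;  // updated by you, each frame
    @end

    @implementation ManualBinder

    - (BOOL)program:(SCNProgram *)program
        bindValueForSymbol:(NSString *)symbol
                atLocation:(unsigned int)location
                 programID:(unsigned int)programID
                  renderer:(SCNRenderer *)renderer
    {
        if ([symbol isEqualToString:@"u_mvp"]) {
            GLKMatrix4 mvp = self.modelViewProjection;
            glUniformMatrix4fv(location, 1, GL_FALSE, mvp.m);
            return YES;  // we bound this one
        }
        return NO;       // anything else stays unbound
    }

    - (void)program:(SCNProgram *)program handleError:(NSError *)error
    {
        NSLog(@"Shader compile/link error: %@", error);
    }

    @end

You then set this object as the program’s delegate and keep its matrix up to date yourself – exactly the bookkeeping SceneKit was supposed to be doing for you.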

As far as I can tell, if you want to use any shaders at all, you can’t use SceneKit.

5 thoughts on “SceneKit & shaders”

  1. “1” is not completely right.  Once you use a custom shader it does indeed override SceneKit’s shaders, so yes, you lose the lighting and texturing.  What you would like here is shader injection, not just custom programs.
    However, you don’t need to (and shouldn’t) load & bind the geometry, position, projection, etc. yourself.  The API to bind SceneKit’s attributes is:
    SCNProgram:
    - (void)setSemantic:(NSString*)semantic forSymbol:(NSString*)symbol options:(NSDictionary *)options;

    For instance, to bind the vertex data to a custom attribute named “a_position” in your shader:

    [myProgram setSemantic:SCNGeometrySourceSemanticVertex forSymbol:@"a_position" options:nil];

    This is trivial for attributes (normals, texcoords, vertex) and the supported uniforms (model/view/projection/normal matrices), but harder for textures: you will have to load the texture yourself using OpenGL, then bind it via the delegate methods described in the blog post – sketched below.

    Also note that variables like gl_ModelViewProjectionMatrix and gl_Color are deprecated.
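
    Something like this, roughly – a sketch with hypothetical names (u_diffuse, textureID); pixels, width and height come from decoding the image yourself:

        // Created once, e.g. at load time.
        GLuint textureID;
        glGenTextures(1, &textureID);
        glBindTexture(GL_TEXTURE_2D, textureID);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, pixels);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

        // ...then, inside the delegate's bindValueForSymbol callback:
        if ([symbol isEqualToString:@"u_diffuse"]) {
            glActiveTexture(GL_TEXTURE0);
            glBindTexture(GL_TEXTURE_2D, textureID);
            glUniform1i(location, 0);  // sampler reads texture unit 0
            return YES;
        }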
     
    2 and 3: The right thing to do is to share the material (not the program!) – that way the shader will be built just once.  Since an SCNProgram overrides the material’s settings, there is no reason not to share the material.
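
    For example – a sketch, where myProgram and nodes are placeholder names:

        SCNMaterial *sharedMaterial = [SCNMaterial material];
        sharedMaterial.program = myProgram;  // the one SCNProgram

        // One material everywhere the shader is needed, so the program
        // is built once rather than once per material.
        for (SCNNode *node in nodes) {
            node.geometry.firstMaterial = sharedMaterial;
        }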

    I agree that the lack of documentation and sample code (for this part of the API in particular) is something that must be fixed.

    • Yeah, that’s quite right – that’s what I’ve since started to figure out.  I was hoping that the last piece – getting the material attributes from the SCNMaterial into the fragment shader – would come together, but you seem to be saying you have to do all that manually. :/

      I wondered if there was some “new world order” in OpenGL of eschewing intrinsics (e.g. gl_Color) in favour of more generic, shader-driven systems.  But that’s not the case today with run-of-the-mill OpenGL and GLSL code, so I’m a bit frustrated by it – I don’t really care what system SceneKit wants to use, but if it’s not the pervasive one it really should be documented.

    • One big problem though, even if you do buy into SceneKit’s model: how do you determine which geometry is being processed by your SCNProgram, in order to know what values to bind (e.g. colour, textures, etc.)?  The only identifying thing given to the SCNProgramDelegate implementor is the SCNProgram.  So potentially you have to create a separate SCNProgram, explicitly, for every object – and hope that SceneKit doesn’t implicitly copy them at any point and break that tenuous link.

      Or, I suppose, create a separate delegate instance for every variation you need – something like the sketch below.  That seems like a hack (and a lot more work).
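
      Roughly this, for the record – a sketch of that per-variation-delegate hack, with hypothetical names (PerObjectBinder, u_colour):

        #import <SceneKit/SceneKit.h>
        #import <GLKit/GLKit.h>
        #import <OpenGL/gl.h>

        // One binder instance per object; each captures that object's
        // values, so the callback never has to identify the geometry.
        @interface PerObjectBinder : NSObject <SCNProgramDelegate>
        @property (nonatomic) GLKVector4 colour;
        @end

        @implementation PerObjectBinder
        - (BOOL)program:(SCNProgram *)program
            bindValueForSymbol:(NSString *)symbol
                    atLocation:(unsigned int)location
                     programID:(unsigned int)programID
                      renderer:(SCNRenderer *)renderer
        {
            if ([symbol isEqualToString:@"u_colour"]) {
                GLKVector4 c = self.colour;
                glUniform4fv(location, 1, c.v);
                return YES;
            }
            return NO;
        }
        @end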

