Coming back from vacation with new energy and inspiration

Yesterday I came back from my four-week summer vacation, and I came back with a lot of new energy and inspiration. Vacations are actually great for regaining inspiration and for giving your mind time to process all the things you encountered during a year of hard work. While working intensively, it may be hard to break habits, and you may find yourself struggling to allocate the time required to learn something new.

For the first two weeks, I was still in the same mindset as when working. I started reading a book about software construction practices and thought about ways to improve my Objective-C coding. While the book was very interesting, I couldn’t really think of any concrete areas of my coding conventions I wanted to improve. Not because I thought there weren’t any; I just couldn’t identify them. As those first weeks passed, I slowly started sinking into vacation mode. Being abroad and spending warm summer nights enjoying good food and beer together with my girlfriend definitely helped that process.

For the following two and a half weeks, I didn’t really think much about work or programming. I stopped reading the book and spent a lot of time with family and friends. Not until the very last few days did my development brain wake up again, this time with lots of new inspiration. From reading a few blog posts by experienced developers, I quickly identified two concrete questions I needed to investigate.

  • Core Data has been around since 2005 and is the result of many years of development and refinement by experienced Apple engineers who know their platform best. For what reason am I not using it?
  • (Mis-)use of the singleton pattern is a notorious way of introducing coupling and reducing flexibility. For what reason am I using it?

These questions were not trivial to answer, so while investigating them, I came up with another two questions that I imagined would be easier to answer.

  • ARC has been introduced by Apple to help developers. Is there any reason not to make use of it?
  • ASIHTTPRequest has been deprecated. Are there viable alternatives?

It turns out I was able to answer all four questions after some reading, testing and brainwork. My findings will be discussed in a later post.



Sprite Batching in OpenGL ES 2.0

Batching geometry is a basic and very common technique for optimizing OpenGL ES 2.0 code. It works by grouping objects together and drawing them in a single draw call instead of one by one. A large number of draw calls can be a serious bottleneck, and reducing that number may significantly increase performance. The reason the number of draw calls matters is that the CPU has to prepare and configure the GL pipeline before each call, and that’s quite an expensive operation.

In this post I’m going to focus on batching sprites, due to their simplicity. A sprite consists of six vertices (two triangles) with a texture applied, so it’s used for almost all graphics in 2D games. It’s also used frequently in 3D games for things such as particle effects that would be very expensive to compute using real 3D geometry. Another advantage of sprites over 3D geometry is that since OpenGL is very good at interpolating textures, they usually appear smooth and don’t have the jagged edges that 3D objects may suffer from.

The method I’m using here is probably just one of several ways to accomplish batching. I didn’t really read about this method anywhere, I just figured that this is a good way of doing it. Feel free to suggest alternative methods. I’ll be happy to hear about them!

Basic sprite shader

We start by looking at a sprite shader with no support for batching. The vertex shader is very simple. All we need as input is the model-view-projection matrix, position and texCoord. The gl_Position is calculated by transforming the position according to the mvp matrix, and the texCoord is interpolated for the fragment shader through its varying variable.

Vertex Shader:

uniform mat4 u_mvpMatrix;

attribute vec4 a_position;
attribute vec2 a_texCoord; 

varying vec2 v_texCoord;

void main() {
    v_texCoord = a_texCoord;
    gl_Position = u_mvpMatrix * a_position;
}

The fragment shader samples the given texture at the varying texCoord and sets the gl_FragColor.

Fragment Shader:

precision mediump float;

uniform sampler2D s_texture;
varying vec2 v_texCoord;

void main() {
    gl_FragColor = texture2D(s_texture, v_texCoord);
}

A limitation in this vertex shader is that all vertices in a draw call need to use the same mvp matrix. The mvp matrix defines the position, rotation and scale of the vertices, which means that if we want to draw multiple sprites at different positions using this shader, we have to redefine the uniform mvp matrix for each sprite. This can’t be done in the middle of a draw call, so we have to use multiple draw calls, which is exactly what we want to avoid!

Sprite shader with batching support

So, looking at the basic sprite shader above, how can we extend it to use a different mvp matrix for each sprite? The answer is to define u_mvpMatrix as an array of matrices. The total space available for uniforms and attributes in OpenGL ES 2.0 is limited, so we have to limit the size of the array. How large you can make it depends on how many additional attributes and uniforms are used. A size of 24 should be quite close to the minimum limit defined in the specification.
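If you want to size the array for the actual device instead of the specification minimum, you can query the limit at run time. A small sketch (note that each mat4 occupies four of the available uniform vectors, and every other uniform you declare eats into the same budget):

GLint maxVertexUniformVectors = 0;
glGetIntegerv(GL_MAX_VERTEX_UNIFORM_VECTORS, &maxVertexUniformVectors);

// A mat4 consumes 4 vec4 slots, so this gives a rough upper bound on the
// number of matrices. Leave headroom for the other uniforms in the shader.
GLint roughMaxMatrices = maxVertexUniformVectors / 4;
NSLog(@"Roughly room for %d mvp matrices", roughMaxMatrices);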

When using an array for the mvp matrix, we also need to define the index for each vertex. This is accomplished by using an additional attribute. OpenGL ES 2.0 has no support for integer attributes, so we’re stuck with a float data type that is converted to an int in the shader.

Our new shader looks like this:

uniform mat4 u_mvpMatrix[24]; 

attribute float a_mvpMatrixIndex;
attribute vec4 a_position;
attribute vec2 a_texCoord;

varying vec2 v_texCoord;

void main() {
    int mvpMatrixIndex = int(a_mvpMatrixIndex);

    v_texCoord = a_texCoord;
    gl_Position = u_mvpMatrix[mvpMatrixIndex] * a_position;
}

We don’t have to make any modifications to the fragment shader, so it stays the same as before:

precision mediump float;

uniform sampler2D s_texture; 

varying vec2 v_texCoord;

void main() {
    gl_FragColor = texture2D(s_texture, v_texCoord);
}

This change requires you to send the a_mvpMatrixIndex attribute for each vertex. All six vertices belonging to the first sprite should have an index of zero, the six vertices of the second sprite an index of one, and so on. The a_mvpMatrixIndex array should look something like this:
0.0,0.0,0.0,0.0,0.0,0.0,1.0,1.0,1.0,1.0,1.0,1.0,2.0,2.0,2.0,2.0,2.0,2.0,3.0…
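On the CPU side, here’s a sketch in Objective-C/C of how the index array and the matrix array could be fed to the shader. The variable names, the kMaxSprites constant and the matrices buffer are assumptions for illustration; u_mvpMatrix and a_mvpMatrixIndex come from the shader above:

// A sketch of the client-side setup (names are illustrative). It assumes
// 'program' is the linked shader program, currently in use via glUseProgram,
// and 'matrices' points to kMaxSprites column-major mat4s (16 GLfloats each).
#define kMaxSprites 24

GLfloat mvpMatrixIndices[kMaxSprites * 6];
for (int sprite = 0; sprite < kMaxSprites; sprite++) {
    for (int vertex = 0; vertex < 6; vertex++) {
        // All six vertices of a sprite share the same matrix index.
        mvpMatrixIndices[sprite * 6 + vertex] = (GLfloat)sprite;
    }
}

// Upload all matrices with a single call.
GLint mvpMatrixLocation = glGetUniformLocation(program, "u_mvpMatrix");
glUniformMatrix4fv(mvpMatrixLocation, kMaxSprites, GL_FALSE, matrices);

// Feed the per-vertex matrix index as a float attribute.
GLint mvpMatrixIndexAttrib = glGetAttribLocation(program, "a_mvpMatrixIndex");
glVertexAttribPointer(mvpMatrixIndexAttrib, 1, GL_FLOAT, GL_FALSE, 0, mvpMatrixIndices);
glEnableVertexAttribArray(mvpMatrixIndexAttrib);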

Summary

By using the method above, you can draw 24 sprites with each draw call, each with a unique position, rotation and scale. All sprites have to use the same texture though, so if you want to use different images for each sprite, you will have to use a sprite sheet, which is a single texture containing multiple images.

If you want to use sprite batching to draw even more than 24 sprites in a single call, you could for instance decrease the uniform space used by requiring all sprites to have constant scale and rotation and only vary by position. That way, you could use a vec4 u_position[96] to draw 96 sprites with each call. Another option would be to perform the mvp matrix multiplication on the CPU before sending the vertices to the shader. That way you could draw a virtually unlimited number of sprites with each call, but I’m not sure this is a very good idea performance-wise, since it increases the load on the CPU before each call, and the GPU is usually a lot better at handling matrix multiplications.

So that’s it. Feel free to comment with suggested improvements or alternatives to this method! This is the method I’ve implemented in Rend, my OpenGL ES 2.0 framework for iOS.



Introducing Rend – A lightweight Objective-C OpenGL ES 2.0 framework for iOS

This post is about Rend, my OpenGL ES 2.0 iOS framework available on github.com/antonholmquist/rend-ios.

When I first started using OpenGL ES 2.0 in my iOS projects more than a year ago, I didn’t use a framework. I sure knew that there were several good and mature options out there, but I was really interested in knowing the inner workings of OpenGL, so I started off writing GL code from scratch. That was a great way of getting to know some of the core concepts, but I quickly realized that there has to be a better and more flexible way of doing these things. In that project, I also instantly fell in love with shaders, which enabled us to create some really amazing effects.

When looking for a framework to use for upcoming projects, I looked at three existing options: Cocos2D, Cocos3D and Unity. None of them seemed perfect for my needs. Cocos2D obviously doesn’t have very good 3D support, Cocos3D didn’t have shader support, and Unity seemed too bloated and hard to integrate with UIKit.

So on my summer vacation last year I decided to start creating my own framework. I bought two books, OpenGL ES 2.0 Programming Guide and OpenGL Shading Language and read them thoroughly. My girlfriend thought I was crazy reading these books on the beach in Croatia, but that’s just how I work. I’ve tried reading novels in the past but they just don’t seem to stimulate me enough.

So there I was on the beach, going through everything from camera matrices to lighting shaders, trying to get a feel for how everything fits together. When we got home I started coding. The books didn’t really contain much about scene graphs, which are a vital part of a usable framework, so I turned a lot to Cocos2D/Cocos3D for reference. I also decided to include the entire Cocos3D math library to avoid reinventing that particular wheel.

My focus has been to create a framework that concentrates on rendering and that can easily be integrated with UIKit. Unlike Cocos and Unity, it doesn’t have any support for actions, physics or sound, but it does pure rendering very well. I’ve spent quite some time profiling my projects trying to get rid of the worst bottlenecks in the framework, so it should be quite fast when used in a good way.

During the past year, the framework has been used in numerous projects and has constantly evolved. Whenever I needed a new feature I implemented it, and that has taught me a lot. The framework contains some classes for doing basic stuff like sprite batching and Wavefront OBJ loading, but I believe the real power lies in the ability to extend it to whatever you need.

There are two examples provided in the repository. The first is a simple rendering of a teapot, and the second, shown in the video below, demonstrates how the framework can be used for per-pixel lighting, bump mapping and some basic animation. Big thanks to Develophant for adding the example source to the project!

Well, that’s some brief background to Rend. I will probably write a few posts later where I explain how to use it. As mentioned earlier, the source code is available on GitHub, so feel free to check it out!



I want to learn JavaScript!

Walking home from work today, I realized that I really want to learn JavaScript. And by learning it, I mean understanding it. I’ve done some JavaScript during the past 10 years, but I can’t say I had much respect for the language. I thought JavaScript was an old web relic from the mid-90s that had survived for no apparent reason. That’s not the case. It has become apparent to me that it’s a powerful language.

Two things that are really hot right now are WebGL and node.js, both of which are powered by JavaScript. Lately I’ve been playing around a bit with node.js and it seems sooo cool, but I feel hampered by not knowing the language! I don’t know how to structure it, I don’t know how to read it and I don’t know how to write it. Like most programmers, I can follow a tutorial and get by doing some simple stuff, but that’s not what I want to do. I want to understand!

So where do I start? The best way to learn is definitely by having fun, so I need an interesting project to engage in. And when I really engage in something, it completely absorbs me. The hard part is finding that interesting project. It can’t be too small, it can’t be too big, and it most definitely has to be meaningful. Having a clear purpose makes it easier.

I’ve spent a lot of time with OpenGL ES during the last year, so I figure that learning node.js would make a good change. When I’ve gotten past the baby steps and found a suitable project, I may even look for a coding partner who wants to join me for some social coding.

We’ll see how things turn out!



OpenGL ES 2.0 Shader Lighting Examples

Writing shader lighting algorithms in OpenGL ES 2.0 is actually pretty simple, but if you’re writing them from scratch it may take some time to get everything right. Below you can find source code examples for a vertex lighting shader and a fragment lighting shader. You can also get them from github.com/antonholmquist/opengl-es-2-0-shaders

Per-vertex lighting is the traditional way of doing lighting in OpenGL ES 1.1, while per-fragment lighting is new to OpenGL ES 2.0, made possible by shaders.

Vertex and fragment lighting comparison

First is a sphere with no lighting, then the same sphere with per-vertex lighting, and finally with per-fragment lighting. You can clearly see some artifacts in the per-vertex version, mainly in the specular highlight. (Per-vertex shading above, per-fragment shading below.)

Vertex Lighting Shader Example

Light is calculated once per vertex and sent to the fragment shader.

Vertex Shader:

struct DirectionalLight {
    vec3 direction;
    vec3 halfplane;
    vec4 ambientColor;
    vec4 diffuseColor;
    vec4 specularColor;
};

struct Material {
    vec4 ambientFactor;
    vec4 diffuseFactor;
    vec4 specularFactor;
    float shininess;
};

// Light
uniform DirectionalLight u_directionalLight;

// Material
uniform Material u_material;

// Matrices
uniform mat4 u_mvMatrix;
uniform mat4 u_mvpMatrix;

// Attributes
attribute vec4 a_position;
attribute vec3 a_normal;

// Varyings
varying vec4 v_light;

void main() {

    // Define position and normal in model coordinates
    vec4 mcPosition = a_position;
    vec3 mcNormal = a_normal;

    // Calculate and normalize eye space normal
    vec3 ecNormal = vec3(u_mvMatrix * vec4(mcNormal, 0.0));
    ecNormal = ecNormal / length(ecNormal);

    // Do light calculations
    float ecNormalDotLightDirection = max(0.0, dot(ecNormal, u_directionalLight.direction));
    float ecNormalDotLightHalfplane = max(0.0, dot(ecNormal, u_directionalLight.halfplane));

    // Ambient light
    vec4 ambientLight = u_directionalLight.ambientColor * u_material.ambientFactor;

    // Diffuse light
    vec4 diffuseLight = ecNormalDotLightDirection * u_directionalLight.diffuseColor * u_material.diffuseFactor;

    // Specular light
    vec4 specularLight = vec4(0.0);
    if (ecNormalDotLightHalfplane > 0.0) {
        specularLight = pow(ecNormalDotLightHalfplane, u_material.shininess) * u_directionalLight.specularColor * u_material.specularFactor;
    } 

    v_light = ambientLight + diffuseLight + specularLight;
    gl_Position = u_mvpMatrix * mcPosition;
}

Fragment Shader:

precision highp float;

varying vec4 v_light;

void main() {
    gl_FragColor = v_light;
}
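One detail that may not be obvious is the halfplane member of the light struct. For a directional light, it’s the normalized half vector between the light direction and the view direction, H = normalize(L + V), which the Blinn-Phong specular term uses instead of computing a reflection vector. A minimal sketch in plain C of how it could be computed on the CPU; the example light direction is an assumption, and using (0, 0, 1) as the view direction is an approximation for a viewer looking down the negative z axis in eye space:

// Hypothetical setup code. lightDirection is normalized and in eye space.
// Requires <math.h> for sqrtf.
GLfloat lightDirection[3] = { 0.0f, 0.7071f, 0.7071f };  // example values
GLfloat viewDirection[3]  = { 0.0f, 0.0f, 1.0f };        // approximate view direction

GLfloat halfplane[3] = {
    lightDirection[0] + viewDirection[0],
    lightDirection[1] + viewDirection[1],
    lightDirection[2] + viewDirection[2]
};
GLfloat length = sqrtf(halfplane[0] * halfplane[0] +
                       halfplane[1] * halfplane[1] +
                       halfplane[2] * halfplane[2]);
halfplane[0] /= length; halfplane[1] /= length; halfplane[2] /= length;

// Pass it to the shader as u_directionalLight.halfplane:
// glUniform3fv(glGetUniformLocation(program, "u_directionalLight.halfplane"), 1, halfplane);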

Fragment Lighting Shader Example

Light is calculated in the fragment shader for each pixel.

Vertex Shader:

precision highp float;

// Matrices
uniform mat4 u_mvMatrix;
uniform mat4 u_mvpMatrix;

// Attributes
attribute vec4 a_position;
attribute vec3 a_normal;

// Varyings
varying vec3 v_ecNormal;

void main() {

    // Define position and normal in model coordinates
    vec4 mcPosition = a_position;
    vec3 mcNormal = a_normal;

    // Calculate and normalize eye space normal
    vec3 ecNormal = vec3(u_mvMatrix * vec4(mcNormal, 0.0));
    ecNormal = ecNormal / length(ecNormal);
    v_ecNormal = ecNormal;

    gl_Position = u_mvpMatrix * mcPosition;
}

Fragment Shader:

precision highp float;

struct DirectionalLight {
    vec3 direction;
    vec3 halfplane;
    vec4 ambientColor;
    vec4 diffuseColor;
    vec4 specularColor;
};

struct Material {
    vec4 ambientFactor;
    vec4 diffuseFactor;
    vec4 specularFactor;
    float shininess;
};

// Light
uniform DirectionalLight u_directionalLight;

// Material
uniform Material u_material;

varying vec3 v_ecNormal;

void main() { 

    // Normalize v_ecNormal
    vec3 ecNormal = v_ecNormal / length(v_ecNormal);

    float ecNormalDotLightDirection = max(0.0, dot(ecNormal, u_directionalLight.direction));
    float ecNormalDotLightHalfplane = max(0.0, dot(ecNormal, u_directionalLight.halfplane));

    // Calculate ambient light
    vec4 ambientLight = u_directionalLight.ambientColor * u_material.ambientFactor;

    // Calculate diffuse light
    vec4 diffuseLight = ecNormalDotLightDirection * u_directionalLight.diffuseColor * u_material.diffuseFactor;

    // Calculate specular light
    vec4 specularLight = vec4(0.0);
    if (ecNormalDotLightHalfplane > 0.0) {
        specularLight = pow(ecNormalDotLightHalfplane, u_material.shininess) * u_directionalLight.specularColor * u_material.specularFactor;
    } 

    vec4 light = ambientLight + diffuseLight + specularLight;

    gl_FragColor = light;
}



OpenGL ES 2.0 iOS Tutorial – Drawing a square

The previous lesson showed us how to set up the project and create a view. Now it’s time to actually draw something, and to keep it simple, why not draw a square? Drawing in OpenGL ES 2.0 may appear a little complicated, but it really isn’t once you understand how it works. So read carefully.

All drawing in OpenGL ES 2.0 is done with something called shaders; you have probably heard of them. They’re new in OpenGL ES 2.0, and once you understand what they’re all about, I’m sure you’re going to love them. Shaders make drawing more powerful, flexible and straightforward than before.

This project can be found on GitHub.

What is a shader?

A shader is an independent computer program executed on the GPU. To create one, you write the source code, compile it and finally link it. This is the same procedure as when you build an iOS app, but the unusual thing about shaders on iOS is that compiling and linking is done at run time. The shader program source code consists of two separate files, the vertex shader source and the fragment shader source. Basically, the vertex shader handles geometry and the fragment shader handles color. These are compiled separately and then linked together to create a program. The shaders are written in a language called the OpenGL ES Shading Language, whose syntax is very similar to C, but it’s not C; it’s an independent language.

Create shader source files

First of all, we create two empty files to store the source code, “VertexShader.vsh” and “FragmentShader.fsh”. These should be added to the “Copy Bundle Resources” list in your target’s “Build Phases”. Since they contain source code, you might have expected them to be added to the “Compile Sources” list, but that’s not the case! As stated above, the shader source code will be compiled at run time, so we want to access the files like regular resources.

Write vertex shader source

The vertex shader handles geometry. Open “VertexShader.vsh” and add the source code below. That’s all we need.

precision mediump float;

attribute vec4 a_position; 

void main() {
    gl_Position = a_position;
}

1. First we need to set the float precision. This just defines the default precision of all floats in the file.
2. Create an attribute named a_position. Simply put, an attribute is a way for us to input per-vertex data to the shader; that’s enough to know for now. We use the prefix a_ to denote that it’s an attribute.
3. Create the main method. gl_Position is a built-in variable that needs to be set in the vertex shader.

Write fragment shader source

The fragment shader handles color. Open “FragmentShader.fsh” and add these lines:

precision mediump float; 

void main() {
    gl_FragColor = vec4(0.0, 1.0, 0.0, 1.0);
}

1. Set default float precision.
2. The purpose of the fragment shader is to set the built-in variable gl_FragColor. Here, we’re setting a nice green color using the vec4 constructor, which takes RGBA as arguments.

Compile vertex shader

Now that we have written the shader source code, we need to compile it. We start by reading the source code into an NSString using standard Cocoa methods. Since OpenGL wants the source code as a plain C string, we need to convert it.

NSString *vertexShaderSource = [NSString stringWithContentsOfFile:[[NSBundle mainBundle] pathForResource:@"VertexShader" ofType:@"vsh"] encoding:NSUTF8StringEncoding error:nil];
const char *vertexShaderSourceCString = [vertexShaderSource cStringUsingEncoding:NSUTF8StringEncoding];

Creating the vertex shader and compiling it is quite straightforward and self-explanatory. We’re just giving it the source and telling it to compile.

GLuint vertexShader = glCreateShader(GL_VERTEX_SHADER);
glShaderSource(vertexShader, 1, &vertexShaderSourceCString, NULL);
glCompileShader(vertexShader);
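The tutorial code skips error handling for brevity, but shaders fail to compile silently, so in a real project you’ll probably want to check the compile status. A minimal sketch:

GLint compiled = 0;
glGetShaderiv(vertexShader, GL_COMPILE_STATUS, &compiled);
if (!compiled) {
    // Fetch and print the compiler log to see what went wrong.
    GLchar infoLog[512];
    glGetShaderInfoLog(vertexShader, sizeof(infoLog), NULL, infoLog);
    NSLog(@"Vertex shader compilation failed: %s", infoLog);
}

The same check applies to the fragment shader below, and glGetProgramiv with GL_LINK_STATUS plays the equivalent role after linking.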

Compile fragment shader

The procedure for compiling the fragment shader is almost identical.

NSString *fragmentShaderSource = [NSString stringWithContentsOfFile:[[NSBundle mainBundle] pathForResource:@"FragmentShader" ofType:@"fsh"] encoding:NSUTF8StringEncoding error:nil];
const char *fragmentShaderSourceCString = [fragmentShaderSource cStringUsingEncoding:NSUTF8StringEncoding];

Create and compile the fragment shader.

GLuint fragmentShader = glCreateShader(GL_FRAGMENT_SHADER);
glShaderSource(fragmentShader, 1, &fragmentShaderSourceCString, NULL);
glCompileShader(fragmentShader);

Create the program

Once we have compiled the vertex and fragment shaders, it’s time to link them together to create a program.

GLuint program = glCreateProgram();
glAttachShader(program, vertexShader);
glAttachShader(program, fragmentShader);
glLinkProgram(program);

Finally, we need to tell OpenGL that this is the program we want to use for drawing.

glUseProgram(program);

Define the geometry we want to draw

To do this, we have to define the corners of the square. Let’s just create a unit square centered around the origin.

GLfloat square[] = {
    -0.5, -0.5,
     0.5, -0.5,
    -0.5,  0.5,
     0.5,  0.5
};

We would now like to send the square geometry data to the shader. So how do we do that?

const char *aPositionCString = [@"a_position" cStringUsingEncoding:NSUTF8StringEncoding];
GLuint aPosition = glGetAttribLocation(program, aPositionCString);

First of all, we need to get a reference to the a_position variable in the vertex shader. This is done with glGetAttribLocation.

glVertexAttribPointer(aPosition, 2, GL_FLOAT, GL_FALSE, 0, square);
glEnableVertexAttribArray(aPosition);

glVertexAttribPointer describes the input data (two floats per vertex, tightly packed), and glEnableVertexAttribArray enables the attribute so that the data is read from our array for each vertex.

Draw

Now everything is set up for us to draw. This is done with glDrawArrays.

glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

The third argument defines how many vertices we want to draw. We’re setting it to four, one for each corner of the square.



OpenGL ES 2.0 iOS Tutorial – Getting Started

So you’re here to learn OpenGL ES 2.0. Great, it’s indeed a lot of fun! Many people may say that learning OpenGL is difficult, but all things in life are difficult before we know them, right? All you need is some patience. I know there are quite a few tutorials on this topic out there already, but I’m sure there is room for another. I remember being new to OpenGL myself, and I know how complex it can appear in the beginning. Have faith, though! It’s extremely rewarding once you start to grasp it.

OpenGL ES 2.0 is a broad subject, so where do we start? I’d say we start from the very beginning.

In case you want to jump straight into the action, this project can be downloaded from GitHub.

1. Create an Xcode project

You already know how to create a new project, don’t you? Then go to your target and select “Build Phases”. Expand the “Link Binary with Libraries” list and add “OpenGLES.framework” and “QuartzCore.framework”.

Next, we make these frameworks available to all files by adding them to our prefix file.
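Assuming a default Xcode project with a .pch prefix header, the imports could look something like this (the OpenGLES header paths are the standard ones for ES 2.0):

// In the project's .pch prefix header, below the existing UIKit/Foundation imports:
#import <QuartzCore/QuartzCore.h>
#import <OpenGLES/ES2/gl.h>
#import <OpenGLES/ES2/glext.h>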

So let’s create a view controller. I prefer a simple one; all the code below will be written in the viewDidLoad method, so that’s really the only method we need here.

Finally, we need somewhere to display our OpenGL content. For that we need a custom subclass of UIView; let’s name it GLView. It needs only one method for this tutorial, layerClass, which should return the CAEAGLLayer class.
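In code, the whole GLView class could be as small as this (a sketch; the file names are up to you):

// GLView.h
#import <UIKit/UIKit.h>

@interface GLView : UIView
@end

// GLView.m
#import "GLView.h"
#import <QuartzCore/QuartzCore.h>

@implementation GLView

// Back this view with a CAEAGLLayer so OpenGL can render into it.
+ (Class)layerClass {
    return [CAEAGLLayer class];
}

@end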

2. Create a context

Now, go to the viewDidLoad method of the view controller. All code from now on goes in that method. We start out by creating a context. The context is what keeps everything together. Without it, we can’t do anything, so we’d better create it. Luckily, it’s easy!

EAGLContext *context = [[[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2] autorelease];
[EAGLContext setCurrentContext:context];

The parameter kEAGLRenderingAPIOpenGLES2 states that we are targeting version 2.0 of OpenGL ES. This is supported on the iPhone 3GS and up, and unless you really, really want to target older devices, I would advise you to forget about kEAGLRenderingAPIOpenGLES1. OpenGL ES 2.0 is the future. And a lot more fun!

3. Create a view

The view is of course also really important, since we need somewhere to draw. Since it’s a subclass of UIView, we can just create it as usual.

GLView *glView = [[[GLView alloc] initWithFrame:CGRectMake(0, 0, 320, 320)] autorelease];
[self.view addSubview:glView];

4. Create a renderbuffer

Unfortunately, OpenGL can’t draw directly to our GLView, but it can draw to something called a renderbuffer. That’s where the actual pixel data is stored. So we draw to the renderbuffer, and then transfer our drawing to the glView. Doesn’t sound that complicated, right? Creating the renderbuffer is a three-step process.

GLuint renderbuffer;
glGenRenderbuffers(1, &renderbuffer);
glBindRenderbuffer(GL_RENDERBUFFER, renderbuffer);

1. Create an unsigned integer.
2. Let OpenGL generate a renderbuffer for us.
3. Bind the renderbuffer to GL_RENDERBUFFER. Binding is like saying to OpenGL: “Hey! From now on, whenever I say GL_RENDERBUFFER, I really mean the renderbuffer (unsigned integer) that I created earlier.” In other words, GL_RENDERBUFFER becomes an alias for ‘renderbuffer’.

[context renderbufferStorage:GL_RENDERBUFFER fromDrawable:(CAEAGLLayer*)glView.layer];

Finally, let OpenGL create storage for the pixels that we’re going to draw. The context can help us with that. By passing glView.layer, the context calculates how many pixels we need and allocates memory for them. The other thing we’re passing is GL_RENDERBUFFER, which now means ‘renderbuffer’, as we just described.
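As a side note, if you later need the pixel dimensions that were allocated (for setting the viewport, for instance), you can query them back from the currently bound renderbuffer:

GLint backingWidth = 0, backingHeight = 0;
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &backingWidth);
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &backingHeight);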

5. Create a framebuffer

You may be a bit confused now. Why the heck do we need a framebuffer when we already have a renderbuffer? Aren’t those the same thing? No, not really. The framebuffer contains the renderbuffer. It can also contain other things, like a depth buffer that stores depth values for each pixel, but we don’t need that now. In this example, the only thing the framebuffer contains is pixel color information. Creating the framebuffer is a three-step process, very similar to creating the renderbuffer.

GLuint framebuffer;
glGenFramebuffers(1, &framebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, framebuffer);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, renderbuffer);

1. Create an unsigned integer.
2. Let OpenGL generate a framebuffer for us.
3. Bind it. Once we have bound it, GL_FRAMEBUFFER means ‘framebuffer’. The last thing we need to do is to connect the framebuffer and the renderbuffer. This is done with glFramebufferRenderbuffer. GL_COLOR_ATTACHMENT0 defines that the renderbuffer we’re passing holds color information (pixels), not depth information. Since the framebuffer can also contain a depth buffer, we need to be clear on this.
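OpenGL won’t complain out loud if something went wrong in this setup, so it can be worth verifying that the framebuffer is complete before drawing. A minimal check:

GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
if (status != GL_FRAMEBUFFER_COMPLETE) {
    NSLog(@"Framebuffer is incomplete: 0x%x", status);
}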

6. Draw!

Now everything is set up for us to draw! The easiest way for us to output something to the renderbuffer is to define a clear color and then clear. This is easy.

glClearColor(1, 0, 0, 1);
glClear(GL_COLOR_BUFFER_BIT);

1. glClearColor defines the color we want to clear with, using four parameters: RGBA. We like red, so we’re setting a red color with full alpha, but you can change that to the color of your choice. Colors are defined from 0 to 1 instead of 0 to 255.
2. glClear “draws” our red color to the renderbuffer.

7. Transfer the pixels to our view

We have now drawn some nice red pixels to the renderbuffer. All we need to do now is transfer those pixels to our view. This is done with presentRenderbuffer. And remember, GL_RENDERBUFFER points to our renderbuffer.

[context presentRenderbuffer:GL_RENDERBUFFER];

You should now see a red 320x320px square in your view controller. It may not be the most exciting thing you’ve ever seen, but it’s a great start!

8. That’s all, folks!

That wasn’t so hard, was it? The next tutorial, Drawing a Square, is about shaders.



Why you really shouldn’t create repeating NSTimers in init or viewDidLoad!

During my work as an iOS developer, I’ve noticed that some people like to create repeating NSTimers in init/viewDidLoad and later attempt to invalidate them in dealloc/viewDidUnload. This is not a good habit, since it almost certainly introduces a bug that may not be apparent at first, but may cause either a memory leak or a crash when a memory warning is received. I will explain why below.

The root of the problem is that the developer doesn’t realize that NSTimers retain their target. This is a hard fact that may not be apparent to novice developers, but it’s really important to know what effect it has on memory management. If the timer is set to repeat, it will retain its target (in this case a subclass of UIViewController) until invalidated. That means that even if the view controller is popped from the navigation hierarchy, it won’t be deallocated, since it’s still retained by the timer, and that’s not what you want, right?

If you created the timer in init, this will cause a memory leak. The intention may have been to invalidate the timer when the view controller is deallocated, but deallocation will never happen as long as the view controller is retained by the timer (which is forever if the timer is repeating). So there’s your memory leak.

On the other hand, if you created the NSTimer in viewDidLoad, you expected it to be invalidated in viewDidUnload. This is correct, but it has some serious implications. First of all, viewDidUnload isn’t called when the view controller is deallocated (I could write another post about that), which may cause the same kind of memory leak as above. Even if you manually call viewDidUnload in dealloc (which I like to do to avoid redundant code), you still have the problem that when the view controller is popped, it won’t be deallocated since it’s (guess what?) retained by the timer. When you get a memory warning later on, viewDidUnload will be called and the timer invalidated. The big problem here is that when the timer is invalidated, the view controller will be deallocated as well. The view controller is dead, and all subsequent attempts to access its instance variables, like setting the timer to nil, will result in a bad access that will crash the application.

There you have it! So what’s the solution?

I like to create repeating timers in viewDidAppear and invalidate them and set them to nil in viewWillDisappear. Since viewWillDisappear will always be called before you receive a memory warning that causes the view controller to unload its view, you will never reach a stage where the timer is the sole retainer of the view controller, which is kind of the root of the problem.
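In code, the pattern could look something like this (a sketch under manual reference counting; the one-second interval, the timer ivar and the tick: selector are just placeholders):

// In your UIViewController subclass, with an NSTimer *timer instance variable.

- (void)viewDidAppear:(BOOL)animated {
    [super viewDidAppear:animated];
    // Scheduled timers are retained by the run loop, so no retain here.
    timer = [NSTimer scheduledTimerWithTimeInterval:1.0
                                             target:self
                                           selector:@selector(tick:)
                                           userInfo:nil
                                            repeats:YES];
}

- (void)viewWillDisappear:(BOOL)animated {
    [super viewWillDisappear:animated];
    // Invalidating releases the timer's retain on self; nil out the ivar too.
    [timer invalidate];
    timer = nil;
}

- (void)tick:(NSTimer *)sender {
    // Periodic work goes here.
}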



Welcome!

Welcome to my first blog! I’ve never really had a blog before, but I thought it would be about time to share some thoughts with the world. To start it off, I spent some time looking for a decent WordPress theme for the blog, and I kind of like this one. I will still need some time to tweak it to my liking, though. I guess that will be an ongoing process. Now, let’s get started!



About Me

I am a developer based in Stockholm. For as long as I can remember, I have been fascinated by code. My best lines of code are written at night.
Co-founder of Monterosa. Learning node.js at invoise.com.