
Unexpected behavior from glVertexAttribPointer data types using pyglet

I’ve been working on implementing a quadtree-based LOD system. I’m using pyglet as a Python binding for OpenGL, and I’m getting some weird behavior; I’d like to know if I’m doing something wrong.

Setting up my buffers (the input data are quadtree leaf keys, to be decoded in the vertex shader):

num_points = 4
buffer = GLuint()
glGenBuffers(1, pointer(buffer))
glBindBuffer(GL_ARRAY_BUFFER, buffer)
data = (GLuint * num_points)(*[0b111001000100, 0b111001010100, 0b111001100100, 0b111001110100])
glBufferData(GL_ARRAY_BUFFER, sizeof(data), pointer(data), GL_STATIC_DRAW)
glEnableVertexAttribArray(0)
glVertexAttribPointer(0, 1, GL_UNSIGNED_INT, GL_FALSE, 0, 0)

Draw call:

@window.event
def on_draw():
    glClear(GL_COLOR_BUFFER_BIT)

    glUseProgram(test_shader)
    glUniformMatrix4fv(glGetUniformLocation(test_shader, b'view_mat'), 1, GL_FALSE, camera.view_matrix())
    glUniformMatrix4fv(glGetUniformLocation(test_shader, b'projection_mat'), 1, GL_FALSE, camera.proj_matrix())

    glBindBuffer(GL_ARRAY_BUFFER, buffer)
    glDrawArrays(GL_POINTS, 0, num_points)
    glBindBuffer(GL_ARRAY_BUFFER, 0)

Vertex shader source for test_shader:

test_v = b'''
#version 430
layout (location = 0) in uint key;
uniform mat4 view_mat, projection_mat;

uint lt_undilate_2(in uint k) 
{
    k = (k | (k >> 1u)) & 0x33333333;
    k = (k | (k >> 2u)) & 0x0F0F0F0F;
    k = (k | (k >> 4u)) & 0x00FF00FF;
    k = (k | (k >> 8u)) & 0x0000FFFF;
    return (k & 0x0000FFFF);
}

void lt_decode(in uint key, out uint level, out uvec2 p)
{
    level = key & 0xF;
    p.x = lt_undilate_2((key >> 4u) & 0x05555555);
    p.y = lt_undilate_2((key >> 5u) & 0x05555555);
}

void lt_cell(in uint key, out vec2 p, out float size)
{
    uvec2 pos;
    uint level;
    
    lt_decode(key, level, pos);
    size = 1.0 / float(1u << level);
    p = pos * size;
}


void main()
{
    vec2 pos;
    float scale;
    lt_cell(key, pos, scale);
    gl_Position = projection_mat * view_mat * vec4(pos*100,0, 1);
}
'''
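For reference, the key decoding can be mirrored on the CPU to sanity-check the input data. Below is a quick Python sketch (not from the original post) of the same bit manipulation, assuming the layout used in the shader above: the low 4 bits hold the level and the remaining bits hold the interleaved x/y position.

def lt_undilate_2(k):
    # Collapse every other bit into a contiguous value
    # (assumes the odd bits have already been masked off).
    k = (k | (k >> 1)) & 0x33333333
    k = (k | (k >> 2)) & 0x0F0F0F0F
    k = (k | (k >> 4)) & 0x00FF00FF
    k = (k | (k >> 8)) & 0x0000FFFF
    return k & 0x0000FFFF

def lt_decode(key):
    # Low 4 bits: quadtree level; even/odd bits above that: x/y.
    level = key & 0xF
    x = lt_undilate_2((key >> 4) & 0x05555555)
    y = lt_undilate_2((key >> 5) & 0x05555555)
    return level, x, y

for key in [0b111001000100, 0b111001010100, 0b111001100100, 0b111001110100]:
    level, x, y = lt_decode(key)
    size = 1.0 / (1 << level)
    print(level, x * size, y * size)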

And this doesn’t render anything for me. After some debugging, I discovered that if I change

glVertexAttribPointer(0, 1, GL_UNSIGNED_INT, GL_FALSE, 0, 0)

To:

glVertexAttribPointer(0, 1, GL_FLOAT, GL_FALSE, 0, 0)

Then I get my expected render, despite the fact that my data is a uint, not a float. I’m fairly new to OpenGL; am I making some mistake here?

[Image: output with GL_FLOAT as dtype]


Answer

If the type of the attribute is integral (e.g. uint), you must use glVertexAttribIPointer rather than glVertexAttribPointer; note the extra I in the name (see the glVertexAttribPointer documentation). Change

glVertexAttribPointer(0, 1, GL_UNSIGNED_INT, GL_FALSE, 0, 0)

to

glVertexAttribIPointer(0, 1, GL_UNSIGNED_INT, 0, 0)

The type argument of glVertexAttribPointer only specifies the type of the source data in the buffer. The attribute itself is assumed to be floating point, so the data are converted to floating-point values; glVertexAttribIPointer instead passes the values through as integers, which is what an in uint attribute requires.
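A minimal sketch of the corrected attribute setup under pyglet, reusing the buffer and names from the question (untested):

# Bind the buffer holding the packed uint keys, then declare attribute 0
# with glVertexAttribIPointer so the values stay integers instead of
# being converted to floats. Integer attributes take no "normalized"
# argument, so the GL_FALSE parameter is dropped.
glBindBuffer(GL_ARRAY_BUFFER, buffer)
glEnableVertexAttribArray(0)
glVertexAttribIPointer(0, 1, GL_UNSIGNED_INT, 0, 0)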
