My question is... If we take, for example, a simple cube, I can make a model in Blender, use the Blend2Bam utility to convert it to a .bam, and import it into Panda3D... Or I can generate a cube with Panda's built-in mechanisms. Which will eat up more memory/system resources and/or have a greater negative impact on framerate as the number of objects increases?
Intuitively, I'd expect procedural geometry to be lower-impact, with each object amounting to less raw data. (A sphere I made in Blender and converted to .bam is 26 MB.)
I also found this paper:
https://dsg.tuwien.ac.at/~sd/papers/EDGE_2023_I_Murturi.pdf
...which seems to corroborate my expectation. However, it does not directly concern Panda3D, and, given that Panda3D is a Python-scripted engine with C++ under the hood, I feel there could be additional variables at play. I don't know how efficient Panda3D's procedural tools are, and I'm hoping for some insight directly concerning the engine.
I've just recently started using Panda3D and simultaneously learning Python; I'm totally new to game development - so thank you kindly for any feedback or input on this!
*Edit - code sample of current procedural cube:
from panda3d.core import (Geom, GeomNode, GeomTriangles, GeomVertexData,
                          GeomVertexFormat, GeomVertexWriter, LVector3)


def normalized(*args):
    # Return a normalized copy of the given vector components.
    myVec = LVector3(*args)
    myVec.normalize()
    return myVec


def makeSquare(x1, y1, z1, x2, y2, z2):
    # vertex + normal + texcoord; v3t2 would also work if normals aren't needed
    format = GeomVertexFormat.getV3n3t2()
    vdata = GeomVertexData('square', format, Geom.UHStatic)

    vertex = GeomVertexWriter(vdata, 'vertex')
    normal = GeomVertexWriter(vdata, 'normal')
    texcoord = GeomVertexWriter(vdata, 'texcoord')

    # Write the four corners of this face, depending on its orientation.
    if x1 != x2:
        vertex.addData3(x1, y1, z1)
        vertex.addData3(x2, y1, z1)
        vertex.addData3(x2, y2, z2)
        vertex.addData3(x1, y2, z2)

        normal.addData3(normalized(2 * x1 - 1, 2 * y1 - 1, 2 * z1 - 1))
        normal.addData3(normalized(2 * x2 - 1, 2 * y1 - 1, 2 * z1 - 1))
        normal.addData3(normalized(2 * x2 - 1, 2 * y2 - 1, 2 * z2 - 1))
        normal.addData3(normalized(2 * x1 - 1, 2 * y2 - 1, 2 * z2 - 1))
    else:
        vertex.addData3(x1, y1, z1)
        vertex.addData3(x2, y2, z1)
        vertex.addData3(x2, y2, z2)
        vertex.addData3(x1, y1, z2)

        normal.addData3(normalized(2 * x1 - 1, 2 * y1 - 1, 2 * z1 - 1))
        normal.addData3(normalized(2 * x2 - 1, 2 * y2 - 1, 2 * z1 - 1))
        normal.addData3(normalized(2 * x2 - 1, 2 * y2 - 1, 2 * z2 - 1))
        normal.addData3(normalized(2 * x1 - 1, 2 * y1 - 1, 2 * z2 - 1))

    texcoord.addData2f(0.0, 1.0)
    texcoord.addData2f(0.0, 0.0)
    texcoord.addData2f(1.0, 0.0)
    texcoord.addData2f(1.0, 1.0)

    # Two triangles per face.
    tris = GeomTriangles(Geom.UHStatic)
    tris.addVertices(0, 1, 3)
    tris.addVertices(1, 2, 3)

    square = Geom(vdata)
    square.addPrimitive(tris)
    return square


# Note: it isn't particularly efficient to make every face a separate Geom;
# it would be better to create one Geom holding all of the faces.
square0 = makeSquare(-1, -1, -1, 1, -1, 1)  # each square is two tris
square1 = makeSquare(-1, 1, -1, 1, 1, 1)
square2 = makeSquare(-1, 1, 1, 1, -1, 1)
square3 = makeSquare(-1, 1, -1, 1, -1, -1)
square4 = makeSquare(-1, -1, -1, -1, 1, 1)
square5 = makeSquare(1, -1, -1, 1, 1, 1)

snode = GeomNode('square')
snode.addGeom(square0)
snode.addGeom(square1)
snode.addGeom(square2)
snode.addGeom(square3)
snode.addGeom(square4)
snode.addGeom(square5)
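For completeness, attaching it to the scene would look something like the following (a minimal sketch, assuming the standard ShowBase setup so that render exists; flattenStrong() can merge the per-face Geoms after the fact):

cube = render.attachNewNode(snode)
cube.setTwoSided(True)   # the faces above aren't consistently wound, so draw both sides
cube.flattenStrong()     # lets Panda merge the six per-face Geoms where possible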
For some added clarity: I don't see anything suggesting that a model imported via .egg or .bam is as minimal, data-wise, as one of these procedural objects. Depending on various factors it could go either way - for example, whether quads are supported or only tris, or, in the procedural case, whether all vertices/faces/tris are built individually or as part of a block. The above code is a lightly-modified example sourced from elsewhere in the community, and it is plainly noted there that this method of generation is not the most memory-efficient, since the cube is split into a potentially unnecessary number of sub-objects.
This perfectly illustrates the heart of my question. In an optimal setup, I'd expect procedural geometry to use less memory and fewer resources, both client- and server-side, than imported meshes made in a 3D program such as Blender - but there are indications that this may not be true absolutely, or in all circumstances. At the end of the day, I don't know enough about the engine itself, or how it uses system resources in conjunction with the renderer, to tell which will be optimal performance-wise.
It doesn't make a difference, in principle. The model loader converts your model to the same GeomVertexData representation in memory, so memory usage should not be a deciding factor. That said, there are some additional considerations that may be relevant in your case.
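If you want to verify this yourself, you can load both versions and compare what actually ends up in memory. A rough sketch (assuming a cube.bam exported via blend2bam exists on disk, and snode is the GeomNode from your code sample):

from direct.showbase.ShowBase import ShowBase

base = ShowBase()

loaded = base.loader.loadModel("cube.bam")   # the loader turns the .bam into GeomVertexData
loaded.reparentTo(base.render)

procedural = base.render.attachNewNode(snode)

loaded.analyze()       # prints GeomNode/Geom/vertex counts for the loaded model
procedural.analyze()   # the same statistics for the procedurally built cube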
The right choice between procedural generation and loading pre-made meshes depends on specifics you haven't provided. One example that comes up often is terrain generation. For a simple terrain, it is often easier, workflow-wise, to make the terrain in a modelling program and import it into Panda3D. However, very large or very complex terrain often benefits from more advanced LOD techniques, in which the terrain becomes less detailed with distance. This is harder to do with pre-generated meshes, so it favours procedural generation, at the cost of adding considerable complexity to the code.
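For instance, Panda3D ships with GeoMipTerrain, which builds terrain geometry from a heightfield image and handles the distance-based LOD for you. A minimal sketch, assuming a ShowBase application and a greyscale heightfield.png whose dimensions are a power of two plus one (e.g. 257x257):

from panda3d.core import GeoMipTerrain

terrain = GeoMipTerrain("terrain")
terrain.setHeightfield("heightfield.png")   # greyscale image driving the heights
terrain.setBlockSize(32)                    # size of each LOD block
terrain.setNear(40)                         # full detail within this distance
terrain.setFar(200)                         # minimum detail beyond this distance
terrain.setFocalPoint(base.camera)          # LOD is computed relative to the camera
terrain.getRoot().reparentTo(base.render)
terrain.getRoot().setSz(60)                 # vertical scale
terrain.generate()

Call terrain.update() each frame (e.g. from a task) so the LOD follows the camera.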
Another popular example is creating a Minecraft-esque world. A naive approach would be to load many separate cube models, but this runs into the too-many-geometry-batches problem. Since generating a cube procedurally is easy, it generally becomes worth investing the time in a procedural generator that builds whole chunks of the world as part of the same Geom object.
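As a rough illustration of that idea, here is a sketch of a chunk generator that writes the faces of many unit cubes into a single GeomVertexData and GeomTriangles, so the whole chunk becomes one Geom. The face table and winding are illustrative only, and hidden-face removal between neighbouring cubes is omitted:

from panda3d.core import (Geom, GeomNode, GeomTriangles, GeomVertexData,
                          GeomVertexFormat, GeomVertexWriter)

def makeChunk(cube_positions, size=1.0):
    fmt = GeomVertexFormat.getV3n3t2()
    vdata = GeomVertexData('chunk', fmt, Geom.UHStatic)
    vertex = GeomVertexWriter(vdata, 'vertex')
    normal = GeomVertexWriter(vdata, 'normal')
    texcoord = GeomVertexWriter(vdata, 'texcoord')
    tris = GeomTriangles(Geom.UHStatic)

    # Outward normal and the four corner offsets for each of a cube's six faces.
    faces = [
        (( 0,  0, -1), [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]),  # bottom
        (( 0,  0,  1), [(0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1)]),  # top
        (( 0, -1,  0), [(0, 0, 0), (1, 0, 0), (1, 0, 1), (0, 0, 1)]),  # front
        (( 0,  1,  0), [(0, 1, 0), (1, 1, 0), (1, 1, 1), (0, 1, 1)]),  # back
        ((-1,  0,  0), [(0, 0, 0), (0, 1, 0), (0, 1, 1), (0, 0, 1)]),  # left
        (( 1,  0,  0), [(1, 0, 0), (1, 1, 0), (1, 1, 1), (1, 0, 1)]),  # right
    ]

    index = 0
    for (cx, cy, cz) in cube_positions:
        for n, corners in faces:
            for (dx, dy, dz), (u, v) in zip(corners, [(0, 0), (1, 0), (1, 1), (0, 1)]):
                vertex.addData3(cx + dx * size, cy + dy * size, cz + dz * size)
                normal.addData3(*n)
                texcoord.addData2(u, v)
            # Two triangles per face, all indexing into the same vertex data.
            tris.addVertices(index, index + 1, index + 2)
            tris.addVertices(index, index + 2, index + 3)
            index += 4

    geom = Geom(vdata)
    geom.addPrimitive(tris)
    node = GeomNode('chunk')
    node.addGeom(geom)
    return node

Usage would be something like render.attachNewNode(makeChunk([(0, 0, 0), (1, 0, 0), (0, 1, 0)])), and the whole chunk then renders as a single batch.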
For now, since you are just starting Panda3D, I would recommend keeping things simple for your workflow, and adding complexity only as needed.