Conversation
|
🖼️ Screenshot tests have failed. The purpose of these tests is to ensure that changes introduced in this PR don't break visual features. They are visual unit tests. 📄 Where to find the report:
✅ If you did mean to change things: ✨ If you are creating entirely new tests: Note: it is very important that the committed reference images are created on the build pipeline; locally created images are not reliable. Similarly, tests will fail locally, but you can look at the report to check that they are "visually similar". See https://github.com/jMonkeyEngine/jmonkeyengine/blob/master/jme3-screenshot-tests/README.md for more information. Contact @richardTingle (aka richtea) for guidance if required |
|
Wow, you've done a ton of work! I don't think we should have the goal to drop OpenGL entirely, because it is still the most widely supported API, even on platforms where Vulkan is not available yet (i.e. the web with WebGL). But I think we can decide to drop everything that is below GL ES 3 and GL ~3.2. |
|
Thanks, your response is very encouraging! I will take that as support to continue working on this project.
That is the position I expected, and I agree. I ask only because not supporting OpenGL would probably have made things a lot easier. 😉
That's a good idea! I hadn't thought about doing it that way. I don't know for certain about the CPU-side stuff just yet, but it will help tremendously with having shaders work with either platform. This is definitely worth looking into more. I'm planning on tackling Material first, as it looks to be the most difficult. I will have more details to provide then. |
…more miscellaneous changes
…more miscellaneous changes
…ss; fixed broken aspects of SkinningControl and GlMesh
…n the cpu side. They are no longer ordered by the gpu layout, which greatly improves usability and compatibility with other pipelines.
…emory mappings better; other housecleaning changes
…ffers from eager to lazy initialization
|
I've made progress on JME4, but not as much as I'd hoped. Most of the time was spent fleshing out or redesigning things I'd already built. I want the engine to not get in your way and not be confusing, and I'm actively looking at ways to improve it further. This is a general overview of the changes I've made so far; please give me feedback on what I should change. Here's an example of the "hello world" of JME:
Engine engine = new SimpleVulkanEngine();
//Engine engine = new OpenGLEngine();
Geometry g = new Geometry("geom_jme4", new Box(1f, 1f, 1f));
Material mat = engine.createMaterial("Common/MatDefs/Misc/Unshaded.j3md");
try (StructMapping<UnshadedParams> m = mat.mapStruct("Parameters")) {
UnshadedParams p = m.get();
p.color.set(ColorRGBA.Blue);
p.glowColor.set(ColorRGBA.Blue.mult(0.2f));
p.vertexColor.set(true);
}
g.setMaterial(mat);
rootNode.attachChild(g);
The viewport area is handled by ViewPort instead of Camera. I don't know why it was in Camera in the first place, but I think it's more at home in ViewPort. Render queues are also handled by ViewPort instead of RenderManager, so you can change which queues are present and how they behave per ViewPort.
viewPort.addGeometryBucket(new GeometryBucket(new OpaqueComparator()) {
@Override
public void setupRender(ViewPort vp, StandardRenderSettings settings) {
// getViewPort() returns a ViewPortArea, not a ViewPort!
settings.pushViewPort(settings.getViewPort().clone().toMaxDepth());
}
@Override
public void cleanupRender(ViewPort vp, StandardRenderSettings settings) {
settings.popViewPort(); // undo the most recent push
}
});
Different camera modes have been moved to subclasses: PerspectiveCamera for perspective projection, ParallelCamera for parallel projection, GuiCamera for GUI rendering, etc. For engine internals, you no longer need to make an explicit distinction between GUI and regular rendering for culling, since that is handled by the camera implementations themselves. If you need to change the mode of a camera, encapsulate that camera inside the camera mode you need:
Camera camToChange = ...
Camera newCamMode = new ParallelCamera(camToChange);
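To illustrate the design choice, here is a minimal sketch of the "camera modes as subclasses/wrappers" idea. All names and fields here are made up for illustration and are not the real JME4 API: each mode answers mode-specific questions itself, so engine code never has to branch on a projection flag, and wrapping an existing camera switches its mode while preserving its settings.

```java
// Hypothetical sketch, not the real JME4 Camera API.
abstract class Camera {
    float near = 1f, far = 100f;

    // Each subclass knows its own projection mode; callers never check a flag.
    abstract boolean isParallelProjection();
}

class PerspectiveCamera extends Camera {
    @Override
    boolean isParallelProjection() { return false; }
}

// Wrapping an existing camera changes its mode while keeping its settings.
class ParallelCamera extends Camera {
    ParallelCamera(Camera toWrap) {
        this.near = toWrap.near; // carry over the wrapped camera's planes
        this.far = toWrap.far;
    }

    @Override
    boolean isParallelProjection() { return true; }
}
```

The point of the wrapper constructor is that "changing the mode" becomes constructing a new camera around the old one, rather than mutating a mode field on a monolithic Camera class.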
The primary way to interact with structured native memory is now through Struct, which has undergone some fairly major changes. Fields are registered manually in the constructor rather than automatically through reflection (to improve performance).
public class Transforms extends Struct {
public final Field<Matrix4f> worldViewProjectionMatrix = new Field<>(new Matrix4f());
public final Field<Matrix4f> viewProjectionMatrix = new Field<>(new Matrix4f());
public Transforms() {
addFields(worldViewProjectionMatrix, viewProjectionMatrix);
}
}
If you have a buffer you'd like to interact with using structs:
MappableBuffer buffer = ...
try (StructMapping<Transforms> m = buffer.mapAllStructs(new Transforms().bind(StructLayout.std140))) {
Transforms t = m.get();
m.sample(0); // bind t to offset 0 in the buffer
t.worldViewProjectionMatrix.set(Matrix4f.IDENTITY);
m.increment(); // bind t to the current offset plus sizeof(t)
t.worldViewProjectionMatrix.set(Matrix4f.IDENTITY);
// binds t to each multiple of sizeof(t) from 0 to the buffer's end in order.
// preserves the current offset
for (int i : m) {
t.worldViewProjectionMatrix.set(t.worldViewProjectionMatrix.get());
}
}
Struct does not store any data itself; it only determines where in the buffer to read and write, so the StructMapping is able to move the same Struct instance around to interact with multiple memory locations. This avoids recalculating the struct's layout each time, and we don't create a lot of garbage.
Meshes use string names instead of enums to identify vertex attributes. I also thought it'd be neat to use structs to define how vertex buffers are laid out and to interact with them. Structs for vertex buffers are required to use VertexAttr fields:
public static class VertexData extends Struct<VertexAttr> {
public final VertexAttr<Vector3f> position = new VertexAttr<>("Position", new Vector3f());
public final VertexAttr<Vector2f> texCoord = new VertexAttr<>("TexCoord", new Vector2f());
public VertexData() { addFields(position, texCoord); }
}
// name "AdaptiveMesh" is to be changed
Mesh mesh = new AdaptiveMesh(4, 1); // vertices, instances
VertexBuffer data = new VertexBuffer(InputRate.Vertex, new VertexData(), JmePlatform.allocateStandardBuffer(1, BufferUsage.Vertex, UpdateHint.Static));
try (StructMapping<VertexData> m = data.map()) {
VertexData v = m.get();
for (int i : m) {
v.position.alias().set(0f, i, i * i); // alias acts as a temporary Vector3f, but is attached to the struct field
v.position.set(); // set from alias
v.texCoord.alias().set(i, i * i);
v.texCoord.set();
}
}
mesh.addVertexBuffer(data);
If you need to interact with certain attributes by name but don't know in which buffer or by which struct they are stored:
try (AttributeMapping m = mesh.mapAttributes(InputRate.Vertex, "Position", "TexCoord")) {
StructField<Vector3f> pos = m.poll(); // fetches the first attribute named
StructField<Vector2f> tex = m.poll(); // fetches the second attribute named
m.sample(0);
pos.set(Vector3f.ZERO);
tex.set(Vector2f.ZERO);
}
Anytime a struct writes to a buffer, it also registers the written area as needing to be updated in some fashion (i.e. uploaded to the GPU). Interacting with a MappableBuffer directly is similar to working with structs; the key difference to watch out for is that you have to manually register the changed buffer areas yourself.
MappableBuffer buffer = ...
try (BufferMapping m = buffer.map()) {
m.getFloats().position(24).put(5f);
m.stage(24 * Float.BYTES, Float.BYTES); // register the changed region for update
}
The try-with-resources pattern for mapping buffers could possibly be removed. I don't know if it's worth the effort of ripping all that infrastructure out, or how performant the result would be.
I'm still working out exactly how applications should go about creating buffers. Vulkan requires a lot of extra work for managing buffers and the different ways they can behave, while OpenGL barely cares. I'm thinking about something like this:
MappableBuffer b = BufferUtils.allocateGraphicsBuffer(sizeInBytes, BufferUsage.Uniform, UpdateHint.Dynamic);
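To make the staging idea above concrete, here is a sketch of the kind of bookkeeping a `stage(offset, size)` call implies: tracking dirty byte ranges and merging overlapping or adjacent ones so the eventual GPU upload issues as few copies as possible. This is a hypothetical illustration, not the real MappableBuffer internals.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical dirty-range tracker; not the actual JME4 implementation.
class DirtyRanges {
    // sorted, non-overlapping [start, end) byte ranges
    private final List<long[]> ranges = new ArrayList<>();

    // Register a written region; overlapping or touching ranges are merged.
    void stage(long offset, long size) {
        long start = offset, end = offset + size;
        List<long[]> merged = new ArrayList<>();
        for (long[] r : ranges) {
            if (r[1] < start || r[0] > end) {
                merged.add(r); // fully disjoint: keep as-is
            } else {
                start = Math.min(start, r[0]); // absorb into the new range
                end = Math.max(end, r[1]);
            }
        }
        merged.add(new long[] {start, end});
        merged.sort((a, b) -> Long.compare(a[0], b[0]));
        ranges.clear();
        ranges.addAll(merged);
    }

    // Hand the accumulated ranges to the uploader and reset.
    List<long[]> drain() {
        List<long[]> out = new ArrayList<>(ranges);
        ranges.clear();
        return out;
    }
}
```

Struct writes would call something like this automatically, while raw BufferMapping access leaves the `stage` calls to the user.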
To assist with type safety on backend flags and enums, I've created Flag and IntEnum interfaces. The intention is to keep the ease of use of Java enums while also allowing users to pass flags/enums not covered by those Java enums. For example:
public enum BufferUsage implements Flag<BufferUsage> { ... }
public void doSomething(Flag<BufferUsage> usageFlag);
doSomething(BufferUsage.Vertex); // is accepted
doSomething(Flag.of(VK_BUFFER_USAGE_VERTEX)); // is accepted
Type "safety" is done by telling the user which flag type is expected (via the generic parameter), rather than strictly enforcing it.
On the internal rendering side, I haven't landed on an overall system I'm happy enough with, but here's what I'm thinking at the moment.
// lambda executed for each visible spatial
for (GeometryBucket b : vp.gatherGeometry(s -> s.runControlRender(engine, vp))) {
// apply render settings
b.setupRender(vp, settings);
settings.applySettings();
// encapsulate geometries in BucketElements, then sort the elements.
// ExampleBucketElement is not real
for (ExampleBucketElement e : b.sort(g -> new ExampleBucketElement(vp.getCamera(), g))) {
// bind element resources (i.e. pipelines, materials)
// update material parameters (lighting, etc)
// update/upload buffers
// render mesh
}
// cleanup render settings that were applied
b.cleanupRender(vp, settings);
});
BucketElement encapsulates a Geometry and all the information needed to properly sort that element in the bucket (camera, pipeline, material, textures, etc.), so that information does not need to be stored in the geometry itself or in other less convenient places. The exact BucketElement implementation used will depend on the renderer, which is why the BucketElement type is not tied to GeometryBucket.
I'm not sure how things like lighting will be handled: on one hand, we may want to give shaders flexibility in how they get light data, but I also want light processing to be as efficient as possible, and that will likely require a more monolithic approach to fetching and packing light data.
A little bonus utility I thought would be nice to have is being able to iterate over a spatial's descendants with an enhanced for-loop. SceneGraphIterator already does this, but I decided it'd be more efficient to have it integrated into Spatial. I'm only bringing this up because I'm also toying around with accumulating inherited spatial properties outside Spatial itself:
Deque<Spatial.CullHint> cullHint = new ArrayDeque<>();
// cullHint is automatically popped each time the iterator goes up in the graph hierarchy
for (Spatial s : scene.iterator(cullHint)) {
if (s.getLocalCullHint() != Spatial.CullHint.Inherit) {
cullHint.push(s.getLocalCullHint());
} else if (cullHint.peek() != null) {
cullHint.push(cullHint.peek());
} else {
cullHint.push(Spatial.CullHint.Dynamic);
}
if (cullHint.peek() == Spatial.CullHint.Dynamic) {
// do frustum culling
}
}
I don't have any strong opinions about this. I just thought it'd be an interesting alternative to explore. |
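The inherited-property idea above depends on the iterator popping the deque each time it ascends in the graph. Here is a minimal sketch of that mechanic over a plain tree; Node, Visitor, and SceneWalker are hypothetical stand-ins, not the real Spatial API. The contract is that the visitor pushes exactly one value per node, and the walk pops one value after finishing that node's subtree, keeping deque depth in sync with tree depth.

```java
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Hypothetical scene-graph stand-in for demonstration purposes.
class Node {
    final String name;
    final List<Node> children = new ArrayList<>();
    Node(String name) { this.name = name; }
    Node attach(Node child) { children.add(child); return this; }
}

interface Visitor {
    // Expected to push exactly one value onto the deque per call.
    void visit(Node node, Deque<String> inherited);
}

class SceneWalker {
    static void walk(Node node, Deque<String> inherited, Visitor v) {
        v.visit(node, inherited);       // visitor pushes this node's value
        for (Node child : node.children) {
            walk(child, inherited, v);  // descend with the accumulated stack
        }
        inherited.pop();                // ascend: discard this node's value
    }
}
```

With this shape, `inherited.peek()` inside the visitor is always the parent's effective value, which is exactly what the cull-hint example above relies on.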
This is an experiment I've been working on for the past month or two. I'm opening this PR now to get feedback on whether Vulkan support should be pursued further, and if so, whether OpenGL support should be dropped. Let me know what you guys think.
The primary goal is to determine how difficult it would be to port the engine over to Vulkan. Currently, I've made a set of tools that match 1:1 with Vulkan's base elements and have managed to render a spinning textured quad with those tools running from a JME application. I'm finding it to be not as bad as expected in terms of complexity.
Going forward, I'm trying to be as faithful as possible to JME's original high-level design. That means Mesh, Material, Geometry, etc. will hopefully be publicly identical for their most common functions.
Other considerations:
Because Vulkan uses structs to store object-creation settings, and those structs can often hold a lot of settings, I decided to make use of builders and try-with-resources blocks. This way, it is obvious which methods are for initialization only, and everything is initialized and closed properly within the try-with-resources block. It also works very nicely with MemoryStack.
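As an illustration of that pattern, here is a sketch in plain Java with made-up names (not the real JME4 API): a builder for creation settings that is also AutoCloseable, so the settings object is only usable inside the try block, mirroring how a Vulkan create-info struct allocated on a MemoryStack is only valid within its stack frame.

```java
// Hypothetical builder + try-with-resources sketch; names are invented.
class BufferCreateSettings implements AutoCloseable {
    private long size;
    private String usage = "Uniform";
    private boolean closed = false;

    // Initialization-only methods return 'this' for chaining.
    BufferCreateSettings size(long size) { this.size = size; return this; }
    BufferCreateSettings usage(String usage) { this.usage = usage; return this; }

    // Stand-in for the actual create call (e.g. something vkCreateBuffer-like);
    // here it just returns a string "handle" describing the settings.
    String create() {
        if (closed) throw new IllegalStateException("settings already released");
        return usage + ":" + size;
    }

    @Override
    public void close() { closed = true; } // native-side memory would be freed here
}
```

Usage then looks like:

```java
String handle;
try (BufferCreateSettings s = new BufferCreateSettings()) {
    handle = s.size(256).usage("Vertex").create();
} // settings released here; using them afterward fails fast
```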