USER FORUM


**Solvespace Projection Matrices**

*(by William D. Jones)*

Hello, I'm attempting to add a Three.js export to Solvespace, along with a default renderer/viewer, as per https://github.com/whitequark/solvespace/issues/33

I had a proof of concept working/demo'ed, but it turned out that the lighting behavior in my renderer was incorrect (the lighting did not move with the current view).

I decided to rewrite the renderer to mimic Solvespace's behavior, but I've been having trouble understanding exactly how Solvespace renders to the screen. Specifically, when I look at the matrix stack defined at line 471 in draw.cpp, I'm not sure how the camera is magically placed in front of all objects so that no clipping occurs, and/or how the whole object is magically moved in front of the camera. (I'm dealing with orthographic projection only.)

Here's how I understand how Solvespace renders so far:

Solvespace uses a custom projection function. It keeps track of two vectors, called projRight and projUp, which determine the plane that the camera looks down onto. Solvespace also keeps track of an offset, which pans the camera within the plane whose normal is projUp cross projRight.

Before rendering, Solvespace will translate the model in terms of the basis formed by projRight and projUp. Solvespace will then rotate the camera; what Solvespace calls a rotation is a change in basis to convert coordinates in terms of projRight and projUp into coordinates in the standard <1,0,0>, <0,1,0>, <0,0,1> basis. The model is never moved (relative to the standard basis) throughout this whole process.
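If my reading is right, the transform can be sketched in a few lines of plain JavaScript (my own sketch, not SolveSpace's actual code; only the projRight/projUp/offset names come from the source):

```javascript
// Sketch of the change of basis described above. projRight and projUp
// span the screen plane; n = projRight x projUp points out of the
// screen toward the viewer.
function dot(a, b) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }
function cross(a, b) {
  return [a[1]*b[2] - a[2]*b[1],
          a[2]*b[0] - a[0]*b[2],
          a[0]*b[1] - a[1]*b[0]];
}

// Express a world-space point in screen coordinates: translate by the
// pan offset, then project onto each basis vector.
function worldToCamera(p, projRight, projUp, offset) {
  const n = cross(projRight, projUp);
  const t = [p[0] + offset[0], p[1] + offset[1], p[2] + offset[2]];
  return [dot(t, projRight), dot(t, projUp), dot(t, n)];
}
```

With the identity basis this is a no-op; after a rotation, the same world point lands at different screen coordinates while never moving in world space.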

Since the lighting always remains in constant positions relative to the current screen display, the light coordinates also have to undergo this transformation (they didn't in my previous version, which is why it was broken).
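If the lights are specified in screen coordinates, converting one into world space is just a linear combination of the current basis vectors. A hedged sketch (again my own, with n assumed to be projRight cross projUp):

```javascript
// Sketch: a light direction d given in screen coordinates (so it stays
// fixed relative to the display) expressed as a world-space direction,
// using the current projRight/projUp/n basis vectors.
function cameraDirToWorld(d, projRight, projUp, n) {
  return [
    d[0]*projRight[0] + d[1]*projUp[0] + d[2]*n[0],
    d[0]*projRight[1] + d[1]*projUp[1] + d[2]*n[1],
    d[0]*projRight[2] + d[1]*projUp[2] + d[2]*n[2],
  ];
}
```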

Do I understand the above correctly?

It's very easy to make a model where, even in the starting view with the camera looking down the standard -z-axis, the model extends through the camera (the tutorial bracket is a good example). If I rotate the camera slightly, I would expect the model to clip through the camera, based on the projection matrices defined in draw.cpp. However, no clipping occurs, and the model consistently appears in front of the camera, regardless of the z-offset (which can be changed by rotating, then panning, or zooming)!

What behavior am I failing to take into account? I would like to create a simple SolvespaceControls three.js viewer, but I am unfortunately stuck trying to correctly emulate clipping correction.


**(no subject)**

*(by William D. Jones)*

I at least figured out a partial answer to my question:

Moving the camera down the positive z-axis to a large value (much greater than the object's size), and setting the far plane a few orders of magnitude beyond the camera's position, accomplishes the same effect as what Solvespace does.

I'm just keeping track of the camera's up and lookAt vectors manually, before changing the light's basis to match the camera's.
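In three.js terms, the workaround might look like the following sketch (the specific factors are my guesses, not values taken from Solvespace):

```javascript
// Sketch of the workaround: push the camera far out along +z and set
// the far plane well beyond it, so the whole model always sits between
// the near and far planes. The multipliers here are assumptions.
function orthoWorkaround(modelRadius) {
  const cameraZ = 1000 * modelRadius; // much greater than the model size
  const near = 0.1;
  const far = 100 * cameraZ;          // orders of magnitude beyond the camera
  return { cameraZ, near, far };
}

// With three.js, something like (untested):
//   const cam = new THREE.OrthographicCamera(-w, w, h, -h, near, far);
//   cam.position.z = cameraZ;
```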


**(no subject)**

*(by Jonathan Westhues)*

I'd suggest that you review my tubing joint calculator, at

http://cq.cx/tubejoin.pl

http://cq.cx/js/tubejoin.js

The matrix gets set up in renderGl().

An orthographic projection can't be described in terms of a camera position, since the camera is infinitely far away. You can fudge it by making that distance finite but big (as you note), but that's an invitation to numerical problems. My suggestion would be to set up the matrix directly, as in SolveSpace or that calculator.
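For what it's worth, here is one way to set the matrix up directly, sketched in plain JavaScript (my own construction for illustration, not the actual code from renderGl() in tubejoin.js; the scale/width/height/depth parameters are assumptions):

```javascript
function cross(a, b) {
  return [a[1]*b[2] - a[2]*b[1],
          a[2]*b[0] - a[0]*b[2],
          a[0]*b[1] - a[1]*b[0]];
}

// Build a row-major 4x4 that maps world coordinates straight to clip
// coordinates: x and y in [-1, 1] across the viewport, z scaled by
// 1/depth for the depth buffer. No camera position appears anywhere.
function makeOrtho(projRight, projUp, scale, width, height, depth) {
  const n = cross(projRight, projUp);
  const sx = 2 * scale / width, sy = 2 * scale / height, sz = 1 / depth;
  return [
    [sx * projRight[0], sx * projRight[1], sx * projRight[2], 0],
    [sy * projUp[0],    sy * projUp[1],    sy * projUp[2],    0],
    [sz * n[0],         sz * n[1],         sz * n[2],         0],
    [0, 0, 0, 1],
  ];
}

// Apply the matrix to a 3D point (implicit w = 1).
function apply(m, p) {
  return m.map(row => row[0]*p[0] + row[1]*p[1] + row[2]*p[2] + row[3]);
}
```

Zooming is then just a change of `scale`, which matches the observation above that the projection matrix is scaled during a zoom.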


**(no subject)**

*(by William D. Jones)*

> An orthographic projection can't be described in terms of a camera position, since the camera is infinitely far away.

I understand that much, which is why the projection matrix is scaled during a zoom, since otherwise nothing would happen.

> You can fudge it by making that distance finite but big (as you note), but that's an invitation to numerical problems.

Okay, you've confirmed what I thought was happening in the Solvespace code. But which OpenGL 1.x features are you using to disable polygon culling/clipping and cause all polygons to be rendered without faking a large z-offset? (If I missed it, I apologize.) A quick glance at your helpful tubing calculator suggests that "scene.frustumCulled = false;" is the feature I'm looking for in three.js.

From what I remember, disabling culling/clipping using a perspective matrix will not cause polygons to render correctly as if all the polygons were in fact in front of the camera.

Is this in fact true for an orthographic projection (i.e. disabling clipping, even if some triangles are closer than the near plane, will cause the correct scene to be rendered, as if the object was given a fake z-bias)?


**(no subject)**

*(by Jonathan Westhues)*

The near and far clipping planes have two purposes:

(a) Clipping stuff.

(b) Determining an offset and scale that lets the screen-space z coordinate make good use of the depth buffer's limited resolution.

In an orthographic projection, there's never a reason (beyond speed etc.) to clip anything. So (a) doesn't matter, but the scene can still render wrong due to (b). I haven't looked closely, but I'm pretty sure the frustumCulled thing is dead code, not relevant here.

The utility functions for setting up your transformation matrix will generally do (b) for you; but since I'm composing that matrix by hand, I do that z coordinate by hand too. My suggestion would be to forget about the planes, and just follow that transformation from the vertex to a reasonable depth buffer entry.
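As a sketch of doing (b) by hand: assuming camera-space z stays within [-zMax, zMax] around the model, a linear remap into [0, 1] gives a usable depth buffer entry without ever defining near/far planes (zMax is my placeholder, not a name from the code):

```javascript
// Remap camera-space z into [0, 1] for the depth buffer, assuming the
// model fits within [-zMax, zMax]. The closest point (z = +zMax) maps
// to 0, the farthest (z = -zMax) maps to 1.
function depthValue(zCamera, zMax) {
  return (zMax - zCamera) / (2 * zMax);
}
```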

