3D graphics and controller handling

The title of this one should be pretty self-explanatory, but there's still more info after the jump.

This time I’m taking a slightly different approach than in the other articles, as this one is written around the code rather than just wrapping together a few functions. What we will be discussing is a combined take on 3D graphics and controller input, where input influences a 3D model.

First things first, I took some time to model a 3D Xbox One controller with separate joints, so that some of its parts can move freely from the rigid root node. This is because the demo lets you enter an input test mode where you can see on screen more or less what’s happening in your hands. Bear in mind I’m no modeler, so the final result is more of a practical demonstration than a modeling tutorial (i.e. expect glaring errors in my geometry).

Controller model with wireframe visible

Let’s start by analyzing some 3D concepts. The source code introduces a custom format I use in Squeeze Bomb, based on CAPCOM’s TM2: a simplified TMD revision without all the bloat, handled natively with inline GTE code. To export the model I used a COLLADA parser that translates the mesh into TM2 data. When you deal with 3D graphics it’s important to count on a reliable format to store all your information, especially one that keeps hierarchy and quadrilaterals (quads are EXTREMELY important for performance and memory usage). Something like Wavefront OBJ keeps a model intact but stores only geometry and material data at best, so better avoid similar limitations.

So how does TM2 work exactly? These are a few structures used to handle header and subheaders:

typedef struct tagTm2Triangles
{
	u32 vertex_offset;		// offset to vertex data
	u32 vertex_count;		// vertex count
	u32 normal_offset;		// offset to normal data
	u32 normal_count;		// normal count
	u32 tri_offset;			// offset to triangle data
	u32 tri_count;			// triangle count
	u32 tri_map_offset;		// offset to triangle texture data
} TM2_TRIANGLES;

typedef struct tagTm2Quads
{
	u32 vertex_offset;		// offset to vertex data
	u32 vertex_count;		// vertex count
	u32 normal_offset;		// offset to normal data
	u32 normal_count;		// normal count
	u32 quad_offset;		// offset to quad index data
	u32 quad_count;			// quad count
	u32 quad_map_offset;	// offset to quad texture data
} TM2_QUADS;

typedef struct tagTm2Object
{
	TM2_TRIANGLES tri;
	TM2_QUADS quad;
} TM2_OBJECT;

typedef struct tagTm2Header
{
	u32 length;				// section length in bytes
	u32 map_flg;			// 1=pointers have been updated to RAM addresses
	int obj_cnt;			// number of objects in model
	TM2_OBJECT obj[1];
} TM2_HEADER;
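Note the map_flg field: it tracks whether the offsets in the header have already been converted to RAM addresses, so the model can be loaded and processed safely. Here's a minimal sketch of how such a one-shot fixup could look; Tm2Map is a hypothetical name, and the assumption that every offset is relative to the load address may not match the actual handler.

```c
#include <stdint.h>

typedef uint32_t u32;

/* Structures repeated from above so this sketch is self-contained. */
typedef struct {
	u32 vertex_offset, vertex_count;
	u32 normal_offset, normal_count;
	u32 tri_offset, tri_count, tri_map_offset;
} TM2_TRIANGLES;

typedef struct {
	u32 vertex_offset, vertex_count;
	u32 normal_offset, normal_count;
	u32 quad_offset, quad_count, quad_map_offset;
} TM2_QUADS;

typedef struct { TM2_TRIANGLES tri; TM2_QUADS quad; } TM2_OBJECT;

typedef struct {
	u32 length;
	u32 map_flg;
	int obj_cnt;
	TM2_OBJECT obj[1];
} TM2_HEADER;

/* Hypothetical mapper: rebase file-relative offsets to absolute
   addresses exactly once, with map_flg as the guard so processing
   the same model twice doesn't rebase the pointers again.
   'base' would be the RAM address the file was loaded at. */
void Tm2Map(TM2_HEADER *hdr, u32 base)
{
	int i;
	if (hdr->map_flg)
		return;					/* already mapped, nothing to do */
	for (i = 0; i < hdr->obj_cnt; i++) {
		TM2_OBJECT *o = &hdr->obj[i];
		o->tri.vertex_offset    += base;
		o->tri.normal_offset    += base;
		o->tri.tri_offset       += base;
		o->tri.tri_map_offset   += base;
		o->quad.vertex_offset   += base;
		o->quad.normal_offset   += base;
		o->quad.quad_offset     += base;
		o->quad.quad_map_offset += base;
	}
	hdr->map_flg = 1;
}
```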

It’s very similar to what TMD has to offer, but it keeps the code as simple as possible and ties functionality to your programming choices, rather than fetching a million slight variations of the same primitive type (i.e. it’s faster and smaller).

Let’s associate that with fast rendering tricks. The rule of thumb is to preallocate as many primitives as possible for the task. My TM2 handler does exactly that: it uses an increasing pointer to pack all the primitives needed for double buffering, and it also copies some data from the model into the primitives, like UV coordinates, CLUT, and tpage. The reason for this is to run minimal code while the console is transforming 3D data, so that the only things left to write into each primitive are coordinates and RGB values for lighting. You can see how that works in detail inside gte\render.c.
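To make the idea concrete, here's a minimal sketch of that preallocation scheme, assuming one bump pointer per drawing buffer. QUAD_PRIM, PrimInit and PackQuad are illustrative names, not the actual ones from the demo, and the primitive layout is simplified.

```c
#include <stdint.h>
#include <string.h>

typedef uint32_t u32;
typedef uint8_t u8;

/* Simplified textured-quad primitive: static fields (uv, clut,
   tpage) are filled once at pack time, so the render loop only
   writes screen coordinates and RGB. */
typedef struct {
	u8  r, g, b, code;
	int16_t x0, y0; u8 u0, v0; uint16_t clut;
	int16_t x1, y1; u8 u1, v1; uint16_t tpage;
	int16_t x2, y2; u8 u2, v2;
	int16_t x3, y3; u8 u3, v3;
} QUAD_PRIM;

static u8 prim_pool[2][32768];	/* one pool per drawing buffer */
static u8 *prim_ptr[2];

void PrimInit(void)
{
	prim_ptr[0] = prim_pool[0];
	prim_ptr[1] = prim_pool[1];
}

QUAD_PRIM *PackQuad(int buf, u8 u, u8 v, uint16_t clut, uint16_t tpage)
{
	QUAD_PRIM *p = (QUAD_PRIM *)prim_ptr[buf];
	prim_ptr[buf] += sizeof(QUAD_PRIM);	/* bump the pointer */
	memset(p, 0, sizeof *p);
	p->u0 = u; p->v0 = v;				/* static data filled once */
	p->clut = clut;
	p->tpage = tpage;
	return p;
}
```

At render time the transform loop only touches x/y and r/g/b on each preallocated primitive, which is the whole point of the trick.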

One thing worth explaining about the rendering code itself, since the rest is commented, is the chrome effect applied to the plastic sections of the controller: it works by using normal vectors to obtain distorted UV coordinates. What I do is drop a few bits from the transformed result and apply it to the UV map within a 64 ± 64 range. The result is a texture that stretches across the whole model depending on how it faces the camera. You can experiment with different textures and ranges for various chrome effects.
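Assuming normals in the GTE's usual 4.12 fixed-point format (where 4096 is 1.0), the bit-dropping step can be sketched like this; chrome_coord and ChromeUv are illustrative names, not the demo's actual functions.

```c
#include <stdint.h>

typedef int16_t s16;
typedef uint8_t u8;

/* A unit normal component in 4.12 fixed point (-4096..4096) is
   shifted down and biased so it lands in the 64 +/- 64 texel range
   described above: 4096 >> 6 == 64. */
static u8 chrome_coord(s16 n)
{
	return (u8)(64 + (n >> 6));	/* spans 0..128, centered on 64 */
}

/* Map a transformed normal's x/y to distorted UV coordinates. */
void ChromeUv(s16 nx, s16 ny, u8 *u, u8 *v)
{
	*u = chrome_coord(nx);
	*v = chrome_coord(ny);
}
```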

The demo also includes some controller code for handling input (it also comes with a couple of handy headers from Sony). What you need to know about controllers, beyond the code that makes the console recognize them, is that some peripherals tend to miss reads when you poll them too frequently. To avoid that, place a call to a pad reader at the top of your logic loop so it gathers input once per frame. Something like this will do:

	// main loop
	while(1)
	{
		// cache pad reads
		Read_pad();
		// reset gs handlers and packet
		GsBeginDraw();
		// logic
		Demo();
		// draw and swap
		GsEndDraw();
	}

What the demo does with controllers is quite basic: it manages input in the main interface (an Xbox One dashboard clone, yay) and it also simulates a fancy pad test, where the 3D controller responds to your input, moving or highlighting parts of itself. The Read_pad() function also contains some useful code for handling repeated presses of the same button: pad_raw is a straight read from the controller buffers, while pad_raw_t masks a held button until it’s released and pressed again. There are also a few more functions about vibration, but let’s keep those for some other day.
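The pad_raw / pad_raw_t split boils down to classic edge triggering. Here's a minimal sketch of the idea; Read_pad_sketch, read order and variable names are illustrative, not the demo's actual implementation.

```c
#include <stdint.h>

typedef uint16_t u16;

static u16 pad_raw;		/* level-triggered: held buttons stay set   */
static u16 pad_raw_t;	/* edge-triggered: set for one frame only   */
static u16 pad_prev;	/* previous frame's raw state               */

/* Called once per frame; hw_state stands in for the value fetched
   from the real controller buffers. */
void Read_pad_sketch(u16 hw_state)
{
	pad_raw   = hw_state;
	pad_raw_t = hw_state & ~pad_prev;	/* only newly pressed bits */
	pad_prev  = hw_state;
}
```

Use pad_raw for things like analog-style navigation with auto-repeat, and pad_raw_t for menu confirmations where one press must mean one action.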

In the source you can also find an updated font handler and the dashboard icon displayer, both containing a useful trick with alpha blending and sprites. Basically, it draws two sprites in a row: one in subtractive mode and the other in additive mode. The first sprite darkens the background while the second brightens it up. The effect is a nice anti-aliasing simulation that can be applied to many interface elements.
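In plain arithmetic, what the two blend passes do to one color channel of a background pixel looks like this (a host-side sketch of the GPU's subtractive and additive semi-transparency behavior, with clamping; names are illustrative):

```c
#include <stdint.h>

typedef uint8_t u8;

static u8 clamp255(int v)
{
	return v < 0 ? 0 : v > 255 ? 255 : (u8)v;
}

/* One channel of a background pixel after the sprite pair:
   the first sprite is drawn in subtractive blend mode, the
   second in additive mode. */
u8 blend_pair(u8 back, u8 sub_texel, u8 add_texel)
{
	int v = clamp255(back - sub_texel);	/* subtractive pass darkens  */
	return clamp255(v + add_texel);		/* additive pass brightens   */
}
```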

Download
Download source
