Utah GLX - Frequently Asked Questions


Table of Contents
1. Utah GLX faq
Description
What is it?
Why use DRI instead of Utah-GLX?
Why use Utah-GLX instead of DRI?
Binary Installation Help
Where do I get Binary Packages?
What versions of XFree86 can I use?
Do I have to download the X Server code?
Can't I just use apt?
Can I get a binary tarball?
Compilation Help
What do I need to compile it?
Where do I get the Mesa source code?
But my application isn't compatible with Mesa 3.1!
What about pMesa?
Where do I get the glx driver source code?
How do I check out a cvs version of the glx source code?
Is there a more stable branch of cvs I can use?
What about the X server source code?
Where do I get the X Server source code?
How do I build the module?
My compiler dies with a 'MAP_FAILED undeclared' message
What about FreeBSD support?
Getting it Running
How do I know if I'm running with the glx module?
How large can my desktop be if I want to use the GLX module?
I'm using the glx module, but things don't seem faster.
Now that it's Running..
Does Quake work with it? How fast is it?
What do you mean? I get a black screen when I run q3demo!
q2 runs really slowly, or complains about libvoodoo
Why am I seeing forced syncs?
Sound is distorted when I'm running an OpenGL application.
Help! My X server is using most of my memory!
How do I enable DMA?
How do I enable AGP support?
Why doesn't the Riva driver support DMA/AGP?
I get a "Bad Request" error..
It crashes my X Server!
Known Issues
XRacer filled up the swap file and crashed the server!
Further Information
What platforms are supported?
Where's this mailing list you keep mentioning?
Glossary

Chapter 1. Utah GLX faq

Description

What is it?

This project is building a hardware-accelerated glx module for free unix operating systems. Currently, we have support for 3D acceleration on the Matrox MGA-G200 and MGA-G400, nVidia's RIVA series, ATI's Rage Pro, and Intel's i810 under XFree86 3.3.x.
We are in the process of updating the modules to work under XFree86 4; some already do.
We also support software rendering.

GLX is basically the glue that ties OpenGL and X together. Most of the code in Utah-GLX is actually chipset-specific driver code. Most of the OpenGL support is provided by Mesa (which has hardware-acceleration hooks that we update).

The GLX protocol is a way to send 3D graphics commands over an X client-server connection. It was created by Silicon Graphics and recently released as open source. In order to distinguish this package from SGI's GLX module we opted to refer to this project as Utah GLX.

Why use DRI instead of Utah-GLX?

Many of the original developers of Utah-GLX went on to work on the "Direct Rendering Infrastructure" extension to XFree86 4.0. See http://dri.sourceforge.net/

Use DRI if you are running linux and you want optimum speed from a graphics card.

Why use Utah-GLX instead of DRI?

Utah-GLX has a commitment to cross-platform support. So when a card is supported by Utah-GLX, it should work on all platforms that run XFree86. In contrast, DRI only runs on linux, and certain specific variants of BSD.

Utah-GLX has an inherently cleaner design than DRI; at the code level, it is more straightforward to understand.
The most striking example of this is that Utah-GLX is a single, "normal" XFree86 extension module, loaded by the server, that implements GLX.
For DRI support, by contrast, you first have to compile the XFree86 server with DRI-specific hacks, then ALSO load the DRI module, then ALSO load a kernel module.

Additionally, the DRI modules only support hardware acceleration for "direct" clients, whereas Utah-GLX only has "indirect" client support. On the one hand, this means that Utah-GLX is slower. On the bright side, it is more compatible: if for some reason you wish to display a GLX client running on a remote machine, Utah-GLX will still give you hardware acceleration, whereas DRI will not (as of last check, September 2002).


Binary Installation Help

Where do I get Binary Packages?

See the sourceforge Utah-GLX files repository for tarballs for various operating systems.


What versions of XFree86 can I use?

All hardware supported by Utah-GLX is supported under XFree86 3.3.5.

Some hardware is supported under XFree86 4.2.

You have to use a module specifically compiled for the version of XFree86 you are using.


Do I have to download the X Server code?

No. You do not need the XFree86 source code, the headers needed by glx are now self-contained.


Can't I just use apt?

There are Debian packages of Utah-GLX for the older XFree86 3.3.x modules.


Can I get a binary tarball?

See the sourceforge Utah-GLX files repository for tarballs for various operating systems.

Compilation Help

What do I need to compile it?

To compile the glx module, you will need the glx source code and a version of the Mesa source code.

egcs 1.0.3 is known to fail to compile some files (internal compiler errors), so use a more recent version (1.1+) or "good old" gcc. Please do not use pgcc unless you have verified that a stable compiler creates working code on your machine...


Where do I get the Mesa source code?

Currently, the driver works only with Mesa 3.1 and the cvs version of Mesa (3.2). It will not work with 3.1 beta 1, beta 2, or beta 3.


    cvs -d :pserver:anonymous@cvs.mesa3d.org:/cvs/mesa3d login
   

Note: Just press enter for the password.


   cvs -d :pserver:anonymous@cvs.mesa3d.org:/cvs/mesa3d co -r mesa_3_2_dev Mesa
   


But my application isn't compatible with Mesa 3.1!

There are known incompatibilities between some applications and Mesa 3.1, mostly relating to memory footprint of large display lists.

Our policy is to focus on the current cvs version of Mesa, and try to maintain support for the old released version. However, the code has grown quite divergent, and the decision was made in mid September 1999 to drop support for Mesa 3.0. Version 3.1 was approaching release, and we felt it was too expensive to continue developing two parallel versions.


What about pMesa?

Some work has been done to add support for pMesa (for those with multiple CPUs), but we haven't made this a priority. The client-server separation inherent in the GLX protocol already splits the rendering pipeline into two processes, meaning that pMesa won't help most applications on dual-processor machines.

It will need some help to bring it up to date, especially since the current pMesa is based on Mesa 3.1 beta 2.


Where do I get the glx driver source code?

Primary access to the source code is through CVS (see the next question).

A FreeBSD 'port' of the old xfree3 modules is available. This package includes a source tarball. See below for more instructions.

Automatic cvs snapshots are also available from the matroxusers site and Ralph Giles' glx page, but they may not be up to date.


How do I check out a cvs version of the glx source code?

Make sure you have cvs installed on your computer.

Set your CVSROOT environment variable as follows:

For bash or other sh-like shells:
export CVSROOT=':pserver:anonymous@cvs.utah-glx.sourceforge.net:/cvsroot/utah-glx'
For tcsh/csh:
setenv CVSROOT ':pserver:anonymous@cvs.utah-glx.sourceforge.net:/cvsroot/utah-glx'

Type 'cvs login' to login (just hit enter for the password), then

cvs -z3 checkout -P glx
to get the old xfree3 code, or
cvs -z3 checkout -P glx-xf4
to get the new xfree4 stuff.

You'll probably want to run 'cvs update' every week or so, to make sure your copy of the source gets updated with any changes.


Is there a more stable branch of cvs I can use?

If for some reason you need to run a simpler version of the mga or nv drivers, we suggest you use the checkpoint branch made just before the WARP code merge in September 1999. The performance of the mga driver will be poor, but AGP is supported on the mga cards through the old gart module. There are only minor differences in the riva driver so far.

Both mga and riva card users can also use Mesa 3.0 with this version of the code. It is completely unsupported, however.

To switch to this branch, set up cvs as above, but at the checkout step, say instead:


	cvs -z3 checkout -P -r last-nowarp glx 
    

The build system is slightly different from the main branch's. See README.configure for instructions.


What about the X server source code?

The glx module is an X server extension, and as such was originally written to be part of the X source tree. The code was kept in a separate archive, but many internal X headers were required for compilation.

Since the XFree86 source code is rather large, we've included the required header files in the 'xc-headers' directory to save on download time. Only the standard X client headers are now required.


How do I build the module?

The driver can be built in the standard way for an autoconf package; see the INSTALL file for details. One hitch is that you have to provide a pointer to the Mesa source tree, but the configure script will use a symlink in the top-level glx directory if it finds one. So, as a quick start:


	cd glx
	ln -s <path_to_mesa_src> ./mesa
	./autogen.sh
	make depend
	make
	make install (as root)
  

To enable the driver, you have to add a ' Load "glx.so" ' line to the Modules section of your XF86Config file. You may also have to add or uncomment the Section "Module"/EndSection pair around it.
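
For example, the relevant part of XF86Config would look like this (any other lines already in your Module section stay as they are):

	Section "Module"
	    Load "glx.so"
	EndSection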


My compiler dies with a 'MAP_FAILED undeclared' message

We've had reports from people using the Slackware Linux distribution of mgadma.c failing to compile because MAP_FAILED is undefined. This is supposed to be in sys/mman.h as a return value of mmap(), but apparently isn't on some systems. We don't know if this is a Slackware problem or a libc5/kernel issue, but feel free to enlighten us.

Until then, we recommend pasting the following code into your /usr/include/sys/mman.h, or at the top of mgadma.c if you prefer not to muck with your header files (or don't have permission to):


	#ifndef MAP_FAILED
        #define MAP_FAILED ((__ptr_t) -1)
	#endif 
   

What about FreeBSD support? (XFree3.3.5 only)

We'd like to have the module build "out of the box" but there are apparently some issues. Marc van Woerkom has the following suggestions for FreeBSD users:

By far the easiest method, assuming you have the Mesa port (/usr/ports/graphics/Mesa3) on your disk (otherwise get it from http://www.freebsd.org), is to download the glx port from http://www.freebsd.org/~3d/distfiles/glx.

You need only the *.shar.bz2 sh-archive; unpack it with


	bunzip2 -v <name>.shar.bz2
	sh <name>.shar
    
then switch into the resulting top directory (glx or riva-glx) and type "make install".

The port will fetch the necessary files from your /usr/ports/distfiles or the Internet and do the rest for you.

If you have questions left, send mail to 3d@freebsd.org.


Getting it Running

How do I know if I'm running with the glx module?

There are two parts:

Server Side

When the X Server starts, you should see something like this:


	(--) no ModulePath specified using default: /usr/X11R6/lib/modules
        GLX extension module for XFree86 3.3.3 -- Mesa version 3.0
	GLX package version 0.9, GLX protocol version 1.2.
	(**) module glx.so successfully loaded from /usr/X11R6/lib/modules
   

This output typically scrolls by pretty fast, so try something like startx 2> ~/tmp/x.out and then examine the file ~/tmp/x.out. Output like the above indicates that the X server is successfully finding and loading the glx module.
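
For example (the log file location is arbitrary):

	startx 2> ~/tmp/x.out
	grep -i glx ~/tmp/x.out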

Don't worry if your output differs slightly, but if there are messages indicating a failure to load the module, please post the output to the mailing list.

GLX should also be listed among the extensions reported by

xdpyinfo
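
For instance, to check quickly from a shell:

	xdpyinfo | grep GLX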

Client Side

When you run an OpenGL program from an xterm, you should see the following text:

	
	@Created GLX Context..
   

This indicates that the OpenGL program is using the correct OpenGL library.


How large can my desktop be if I want to use the GLX module?

This does not fully apply to the nVidia driver.

Since the driver only accelerates double-buffered visuals, you'll need enough free off-screen memory for your window and a 16-bit Z-buffer. (A 32-bit Z-buffer isn't supported yet.)

The table below shows how much memory you'll have left if you are running a full-screen application. That memory can be used for texture storage, though the X server has probably already used some of it for the pixmap cache.

	On-board memory (MB)   Resolution   Bit depth   Texture memory
	8                      1024x768     16          3.5MB
	8                      1280x1024    16          0.5MB
	8                      800x600      32          3.5MB
	8                      1024x768     32          0.5MB
	16                     1600x1200    16          5MB
	16                     1280x1024    32          3.5MB
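
For example, assuming one front buffer, one back buffer and a 16-bit Z-buffer, an 8MB card at 1024x768 in 16-bit colour needs roughly 1024 x 768 x 2 bytes (about 1.5MB) per buffer, or about 4.5MB in total, which leaves the 3.5MB shown in the first row of the table for textures.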

Note: You need to change the resolution in your XF86Config; changing resolution on the fly with ctrl-alt-+/- isn't enough.

Note: The mga driver may use the primary memory for both textures (set mga_systemtexture=1) and backbuffers/depthbuffers. Using main memory for backbuffers/depthbuffers is much slower than having them on the card.


I'm using the glx module, but things don't seem faster.

Clients need to be linked against the right version of the GL library to make use of the driver. If apps are not compiled to use libGL, you will need to modify the libMesaGL symlink to point to the correct library. To do this, type the following as root:


 	rm /usr/lib/libMesaGL.so.3
 	ln -s /usr/X11R6/lib/libGL.so.1.0 /usr/lib/libMesaGL.so.3
 	ldconfig
    

If for some reason you want to switch back to the software Mesa library:


	rm /usr/lib/libMesaGL.so.3
	ln -s /usr/lib/libMesaGL.so.3.0 /usr/lib/libMesaGL.so.3
	ldconfig
    

Try resizing the window. The current crop of consumer 3D hardware only handles the actual drawing of triangles to the screen; most of the viewing calculations are still done by the host processor. In some applications, this can lead to counter-intuitive effects: for example, a scene doesn't run any faster, but you can triple the window's screen area and add bilinear-filtered textures without it running any slower. This is particularly true given the performance limitations of the current driver. We're working on it!

Andree Bormann has put together a page on getting various games to work with the glx module. You might also look there if you're having trouble.


Now that it's Running..

Does Quake work with it? How fast is it?

The driver does indeed work quite well with quake2 and q3. Performance is currently between 15 and 40 fps, depending on the options used.


What do you mean? I get a black screen when I run q3demo!

This sounds like a Quake bug to me, but here is one common problem to watch out for:

bill@taniwha.org wrote:

I'm having problems getting q3test to go. All I get is a black screen with some dots in the top right corner. (ctrl-alt-bspace gets things sane again).

Ryan Drake answered:

Quake is trying to change resolution to 640x480 (or whatever) but you don't have those modes defined in the Screen section of your /etc/X11/XF86Config. Example:

Modes "1024x768"

Causes the black screen lockup when Quake tries to switch resolution. Just change that line to:

Modes "1024x768" "800x600" "640x480"

And make sure you DON'T try to switch to any resolution not listed there or... boom.


q2 runs really slowly, or complains about libvoodoo

quake2 (and the original quake and quakeworld) predate the glx-based drivers and tend to assume you're using either the software Mesa renderer or Mesa with the Glide drivers for the 3Dfx Voodoo series. John Carmack has promised an updated version of quake2, but until it arrives, some hacking is involved.

Try preloading the utah-glx libGL.so. On a Linux bash shell:


LD_PRELOAD=/usr/X11R6/lib/libGL.so quake2 +set vid_ref glx +set gl_driver /usr/X11R6/lib/libGL.so.1


Why am I seeing forced syncs?

If you have some animation or other constantly-updating display going in another window, that task competes with the glx module for control of the card's drawing engine. This often forces a server sync (and can hurt performance). Taskbar cpu-meters and animated banner ads are common culprits.

You can see whether this is happening by enabling the "performance boxes" debugging option in glx.conf. Forced server syncs are marked by the appearance of a dark blue box.

As a remedy, you can try closing the offending application, or running your glx application in a separate X session.


Sound is distorted when I'm running an OpenGL application.

This is a known problem with the driver when using pseudo-DMA. To fix it, you need to reserve some memory at bootup to use for DMA buffers; read on to see how to do this. For a longer discussion of the problem, see http://www.alsa-project.org/misc/vgakills.txt


Help! My X server is using most of my memory!

If you're looking at the output of top or a similar utility, be aware that it reports all of the memory-mapped interfaces to the video card, as well as the entire AGP aperture, as part of the X server process's memory. Some of this memory is actually on the card, and some is system memory allocated for command/texture buffers, but most of it doesn't really exist. So if you see 80 or 150 MB in top, just be aware that it's not an accurate measure of resource usage.


How do I enable DMA?

The default driver configuration uses the CPU to send commands to the card. With direct memory access (DMA), the card fetches instructions directly from main memory. This frees the processor to do other work (such as calculating the next frame) so it tends to be much faster. There are two ways of doing these transfers: from static buffers pre-allocated at boot time, and from dynamic buffers through the AGP interface. This section only addresses the former. See the next section for info on AGP transfers.

You need to reserve anywhere from 8 to 32 megs of memory to enable DMA. For example, to reserve 8 megs of memory on a 128MB system you would add the following line to /etc/lilo.conf:

append="mem=120M"

then run 'lilo' as root to install the new configuration.
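
For reference, the append line goes inside the image stanza (or at the top level) of lilo.conf. A minimal sketch, where the kernel image, label and root device are only examples:

	image=/boot/vmlinuz
	    label=linux
	    root=/dev/hda1
	    append="mem=120M"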

Now you need to tell the glx module how much memory you reserved. Add the following lines to /etc/X11/glx.conf


	    mga_dma=3
	    mga_dmaadr=120
	    mga_dmasize=8
   

mga_dma - set this to 3 for maximum performance.

mga_dmaadr - the value of the mem= line you set in lilo.conf

mga_dmasize - the amount of ram you reserved for DMA buffers, specified in megabytes.

You may also want to set mga_systemtexture=1 to turn on texturing from main memory.

A longer explanation of these variables is included in the sample glx.conf.

Note: The riva driver currently doesn't support DMA.

Note: If you have an ATI Rage Pro, replace mga with mach64 in the above; see the example below.
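
For example, the equivalent Rage Pro settings in /etc/X11/glx.conf would be (same meanings as the mga_* options above):

	mach64_dma=3
	mach64_dmaadr=120
	mach64_dmasize=8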


How do I enable AGP support?

MGA and ATI specific

To use the AGP features of your card, we use the 'agpgart' kernel module, which acts as a sort of "AGP driver" for your motherboard's chipset. Loading this module into the kernel allows the glx module to program the GART (Graphics Address Remapping Table) registers with appropriate values to transfer commands to the card.

agpgart is included in later 2.3.x kernels. If you want to use it with 2.2.x you will need to get it manually:

cvs -z3 checkout -P newagp

then follow the instructions in the README.

Basically, load the module and set mga_dmaadr=agp to enable AGP transfers. If it works, you will no longer need the "append mem=" line in lilo.conf.
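
As a concrete sketch (the exact module-loading step varies with your kernel and distribution), load the module:

	modprobe agpgart

then point the driver at AGP in /etc/X11/glx.conf, leaving the other DMA options as described in the previous question:

	mga_dmaadr=agp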


Why doesn't the Riva driver support DMA/AGP?

Basically, because nVidia hasn't given us the documentation we need to add this to the driver, nor have they released such a driver themselves. Myers Carpenter had this to say:

Here's where the problem lies: While nVidia "opened" up and came out with the X server and glx mod for their cards, they haven't "opened" up their specs. We can do stuff to the code they have given us, sure, but the problem is we don't know how to do stuff like AGP/DMA i/o with the cards. They haven't told us how to talk to the cards and do this. They have released their "Resource Manager" (this is what an nVidia programmer termed it in an email to the glx-dev list), which is basically a layer of software through which you can communicate with the cards and do AGP/DMA i/o (or at least that is what I've gathered. Would I be wrong in saying it's kind of like Glide for the Voodoo cards, but even more low-level?) ... *BUT* ... (this is a really big but), not only have they released it as preprocessed code (i.e. they made it nearly impossible to be of any real use to linux hackers without spending a lot of time reverse engineering it), but also as a kernel module that I have yet to hear of a single person being able to compile/run (I've tried too).

The linux-nvidia list is another, more evangelical, source of information on these topics.


I get a "Bad Request" error..

Please post a report to the mailing list. Include information such as whether you compiled from cvs or installed a binary, what application you were running, and so on.


It crashes my X Server!

There are basically two things you want to be able to do at this point:

Recover from a lockup

If the X server faults and drops you back to the command prompt, consider yourself lucky :) Often, the machine will lock up hard, leaving you little choice but to reboot. Sometimes you can telnet in (if you have a second machine and a network). Other times, pressing ctrl-alt-del will safely reboot the machine. If you compile your own kernel, you can use the sysrq key to recover some control and reboot cleanly even if the server leaves the console in an unusable state (see linux/Documentation/sysrq.txt for more info).

Capture useful data

Try to capture useful information about what caused the crash. This is a combination of your system setup, how the module was compiled, and what OpenGL/GLX calls caused the crash. Currently, the module logs information to a file, glx_debug.log, in /var/log. If it's a reasonable size, please post this file (or the relevant portion) when reporting a crash. By default, this file will only contain basic but urgent messages, such as the compile-time options of the module or a serious error recognized at run time. By setting an environment variable before starting the X server, more information can be logged. Be very careful with this: setting the variable to 3 or 4 can cause a tremendous amount of data to be printed out (we're talking whole vertex arrays here). Unless you know what you're doing, only set this variable to 1 or 2. More information is in the file glx/docs/debug.txt.

Application information can also be extremely useful when debugging a crash. If you're a programmer, isolating which OpenGL function calls caused the crash can be extremely helpful, and a small test app that reproduces the problem will help ensure it is fixed; a minimal example is sketched below.
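
The following is a hypothetical minimal test program, not part of the Utah-GLX sources: it opens a double-buffered window with an indirect rendering context, clears it and draws a single triangle, which is often enough to reproduce context-creation or first-frame problems. Build it with something like 'cc -o glxtest glxtest.c -L/usr/X11R6/lib -lGL -lX11'.

	/* glxtest.c - minimal GLX test program (hypothetical example) */
	#include <stdio.h>
	#include <unistd.h>
	#include <X11/Xlib.h>
	#include <GL/gl.h>
	#include <GL/glx.h>

	int main(void)
	{
	    Display *dpy;
	    XVisualInfo *vi;
	    XSetWindowAttributes swa;
	    Window win;
	    GLXContext ctx;
	    XEvent ev;
	    /* double-buffered RGBA visual with a 16-bit depth buffer */
	    int attribs[] = { GLX_RGBA, GLX_DOUBLEBUFFER, GLX_DEPTH_SIZE, 16, None };

	    dpy = XOpenDisplay(NULL);
	    if (!dpy) { fprintf(stderr, "cannot open display\n"); return 1; }

	    vi = glXChooseVisual(dpy, DefaultScreen(dpy), attribs);
	    if (!vi) { fprintf(stderr, "no suitable visual\n"); return 1; }

	    swa.colormap = XCreateColormap(dpy, RootWindow(dpy, vi->screen),
	                                   vi->visual, AllocNone);
	    swa.event_mask = ExposureMask;
	    win = XCreateWindow(dpy, RootWindow(dpy, vi->screen), 0, 0, 300, 300, 0,
	                        vi->depth, InputOutput, vi->visual,
	                        CWColormap | CWEventMask, &swa);
	    XMapWindow(dpy, win);
	    /* wait until the window is actually on screen before drawing */
	    do { XNextEvent(dpy, &ev); } while (ev.type != Expose);

	    /* False = indirect context, so rendering goes through the glx module */
	    ctx = glXCreateContext(dpy, vi, NULL, False);
	    glXMakeCurrent(dpy, win, ctx);

	    glClearColor(0.0, 0.0, 0.3, 1.0);
	    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
	    glBegin(GL_TRIANGLES);
	    glColor3f(1.0, 0.0, 0.0); glVertex2f(-0.5, -0.5);
	    glColor3f(0.0, 1.0, 0.0); glVertex2f( 0.5, -0.5);
	    glColor3f(0.0, 0.0, 1.0); glVertex2f( 0.0,  0.5);
	    glEnd();
	    glXSwapBuffers(dpy, win);

	    sleep(3);   /* leave the frame visible for a moment */
	    glXMakeCurrent(dpy, None, NULL);
	    glXDestroyContext(dpy, ctx);
	    XCloseDisplay(dpy);
	    return 0;
	}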

John Carmack also has these debugging suggestions you should try.


Known Issues

XRacer filled up the swap file and crashed the server!

There is a known incompatibility between XRacer and Mesa cvs. See the entry on "a more stable branch" for a way to use Mesa 3.0.

Or fix the problem for us. XRacer generates very large display lists, and the current code in Mesa cvs doesn't handle them very well. See this discussion or the XRacer faq for details.


Further Information

What platforms are supported?

We've had reports of the driver working on alpha, ppc, and the x86 family of processors. It should be workable on any unix-like environment that runs XFree86.


Where's this mailing list you keep mentioning?

Most of the development discussions take place via email on the 'utah-glx-dev' list on lists.sourceforge.net.

To subscribe to the list, go to the utah-glx-dev info page and fill out the form toward the bottom.

The info page has a list archive and some other links as well. Older messages are archived under the name glx-dev.

The official project homepage is http://utah-glx.sourceforge.net/.

There is an irc channel as well: #glx on irc.openprojects.net.

Glossary

This is a general glossary of terms, intended to be helpful in understanding some of the jargon associated with the glx project and 3D graphics drivers in general.

Chipset

High density electronic circuits are often called 'chips' or 'microchips' because they're laid out on a small square of silicon or another thin, flat substrate. That's what's embedded inside the plastic/ceramic squares inside your computer. 'Chipset', then, generally refers to a collection of microchips designed together to handle some task. In the context of graphics driver development, there are generally two senses used. Specifically, 'chipset' can refer to the particular acceleration engine used in your graphics card. (Things like 'ATI Rage Pro', 'Voodoo 3', or 'Matrox G400'.) This is usually what is meant when someone asks "what driver?" or "what chipset are you using?" This term is also (more generally) used to refer to the larger chips on your motherboard which interface between the main processor, memory, and the various peripherals. If you're talking about, e.g., the agpgart kernel driver for dma support, this is the chipset people will be asking about. (Things like '440BX' or 'VIA MVP3'.) These days, the 'chipset' may well be integrated into a single package, but the historical usage persists.

Direct Memory Access

Direct Memory Access is a way of allowing a peripheral device to access data in main memory directly. Enabling this feature where possible offers a significant performance improvement in the context of graphics (and most other) drivers. It allows the cpu to simply tell the graphics accelerator where it has stored the data to be displayed and get on with calculating the next frame, rather than spending time transferring the data to the card a few bytes at a time.

Programmed Input/Output

Programmed (or Processor) I/O is a mode of transferring data to the graphics card (or any other peripheral) that a driver can use. It consists of having the cpu write the data directly to the card, a few bytes at a time. This is generally not one of the faster ways to proceed--the opposite of DMA mode in that sense.

Pseudo DMA

The Matrox graphics adapters provide a handy method of data transfer for driver writers. It's sort of a hybrid of PIO and DMA modes, allowing one to write commands and data directly (PIO mode) but in the same format the card expects for data transferred via DMA. It's great for debugging, since it's harder to lock up the system in PIO mode, but you can use the same code to generate the commands as the full-speed DMA driver.

Memory Type Range Registers

Some processors, notably the Intel P6 family and clones, support what are called memory type range registers, or MTRRs, which allow you to configure the way the processor accesses a particular region of memory. This is relevant to graphics drivers because enabling a mode called 'write combining', where successive stores are accumulated and then written in a burst, can increase transfer speed by a factor of two or more, greatly increasing the amount of data that can be sent to the graphics hardware in a given time, and so the overall performance of the driver.

See for example mtrr.txt in the Linux kernel documentation for more information.
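
On Linux, write combining can be enabled through the /proc/mtrr interface described in that document. A minimal sketch, run as root; the base address and size here are made up, so use the ones your card's framebuffer actually occupies (reported in the X server log or in /proc/mtrr):

	echo "base=0xe0000000 size=0x1000000 type=write-combining" > /proc/mtrr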