April 2010


I was waiting around a bit for my younger son’s doctor’s appointment this morning, so I decided to edit a book. I finished it just now; it’s called Another Introduction to Ray Tracing. It’s 471 pages in book form. You can download it for free, or order a paperback copy from PediaPress for $22.84 plus shipping. I won’t earn a dime from it, but since it took me less than two hours to make, no problem.

So what’s happening here? After investigating Alphascript and Betascript Publishing a month ago, reporting on them on Slashdot, and following up on a lot of great comments, I learned a number of interesting tidbits. Here’s a rundown.

First, VDM Publishing itself is sort of a vanity press, but with no cost to the author. It seeks out authors of PhD theses and similar works, asking for permission to publish. This is not all that unreasonable: because the works are only published on demand, the authors do not have to pay anything; they even get a few hardcopies for free. Here’s an example from our field that I reported on in February. That said, it’s mostly a win for VDM Publishing, which charges steep prices for the resulting works. Such not-quite-books mix in with other books on Amazon. It takes a bit of searching to realize that the work is a thesis and likely could be downloaded for free. A bit misleading, perhaps, but not all that horrifying. Caveat emptor.

VDM Publishing also has an imprint called LAP, Lambert Academic Publishing, which does the same thing, publishing theses such as this one by Nasim Sedaghat. With a little Googling you can find Nasim, and then find the related paper for free.

VDM’s imprints Alphascript and Betascript Publishing I’ve already described; they’re little more than random repackagers of Wikipedia articles. Here’s an example book. I posted one-star reviews for a few of these books on Amazon; what’s funny is that the owner of the firm actually responded to my criticism (with a one-size-fits-all response in slightly broken English).

Four weeks ago Alphascript had 38,909 and Betascript 18,289 books listed on Amazon. To my surprise they now have 39,817 and 18,295 books, a total increase of (only!) 914 new books – looks like they’re slowing down. They’ll have to work hard to catch up with Philip M. Parker’s 107,182 books or his publishing firm ICON Group International, with 473,668 books. The New York Times has an interesting article about this guy.

Betascript Publishing has two books found on Amazon related to ray tracing: Ray Tracing (Graphics) and Rasterization (which includes a section on ray tracing). The ray tracing book is 88 pages long and $46, more than 50 cents a page. My book, at $22.84 for 471 pages, is less than a nickel a page. So my new book’s better per pound. I actually worked a little compiling my book, making logical groupings, picking relevant articles, creating chapter headings, the whole nine yards (never did figure out how to make a cover from an existing Wikipedia image, though). The exercise showed me the limits of Wikipedia as a book-making resource: the individual articles are fine for what they are, some are wonderful, and editing them in a somewhat logical flow has some merit. However, there’s no coherence to the final product and there are large gaps between one article and the next. How to generate rays for a given camera? Sorry, not in my book.
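To show how small that gap is, here is what generating primary rays for a simple pinhole camera looks like. This is my own minimal sketch (the function name and parameters are mine, not from any Wikipedia article), assuming a camera at the origin looking down −z:

```python
import math

def primary_ray(x, y, width, height, fov_deg, aspect=None):
    """Return a unit ray direction for pixel (x, y) of a pinhole camera
    at the origin, looking down -z, with the image plane at z = -1."""
    if aspect is None:
        aspect = width / height
    # Half-height of the image plane, from the vertical field of view.
    scale = math.tan(math.radians(fov_deg) * 0.5)
    # Map the pixel center to [-1, 1] normalized device coordinates.
    ndc_x = (2.0 * (x + 0.5) / width - 1.0) * aspect * scale
    ndc_y = (1.0 - 2.0 * (y + 0.5) / height) * scale
    # Unnormalized direction; the ray origin is the camera position.
    d = (ndc_x, ndc_y, -1.0)
    length = math.sqrt(sum(c * c for c in d))
    return tuple(c / length for c in d)
```

Pixels near the image center produce rays nearly along the view axis, and corner pixels fan out according to the field of view – exactly the kind of connective material the article collection lacks.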

Still, it was great to learn of PediaPress and the ability to make my own Wikipedia book for free. Poking around their site, I even found a book on 3D computer graphics, called 3D Computer Graphics (catchy, neh?). Seeing others making books, I decided to share my own, so now it’s official. Mind you, I haven’t actually read through my book, nor even really checked the flow of articles – no time for that. I mostly grouped by subject and title after identifying likely pages. That said, I do like having a PDF file of all these articles that I can search through.

Obviously authors are not about to be replaced by Betascript books any time soon. If you want to read a real introduction to the topic, a book like Ray Tracing from the Ground Up might serve you better, even if it is a whole dime a page. The cost/benefit ratio of a good book is something I’ll never get over: books sell at prices equivalent to just an hour or two of a programmer’s time, yet yield so much in the right hands.


Quite the backlog, so let’s whip through some topics:

  • GDC: ancient news, I know, but here is a rundown from Vincent Scheib and a summary of trends from Mark DeLoura.
  • I like when people revisit various languages and see how fast they now are on newer hardware and more efficient implementations. Case in point: Quake 2 runs in a browser using JavaScript and WebGL.
  • Morgan McGuire pointed out this worthwhile article on stereoscopic graphics programming. Quick bits: frame tearing is very noticeable since it is visible to only one eye; vsync is important, which may force lower-resolution rendering, making antialiasing all the more important. UI elements drawn on top look terribly 2D, and aim-point UI elements need to be given 3D depths. For their game MotorStorm, going 3D meant a lot more people liked using the first-person view, and this view with stereo helped perception of depth, obstacles, etc. There are also some intriguing ideas about using a single 2D image and reprojecting it with the depth buffer to get the second image (it mostly works…).
  • I happened to notice ShaderX 7 is now available on the Kindle. Looking further, quite a few other recent graphics books are, too. What’s odd is that the price differential varies considerably: the Kindle ShaderX 7 is only $3.78 cheaper, while Fundamentals of Computer Graphics is $20 less.
  • Speaking of ShaderX, its successor GPU Pro is not out yet, but Wolfgang started a blog about it (really, just the Table of Contents), in addition to his other blog. The real news: you can use Amazon’s Look Inside feature to view the contents of the book right now!
  • Here are way too many multithreading resources.
  • In case you somehow missed it, you must see Pixels.
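The depth-buffer reprojection idea from the stereoscopic article above can be sketched in a few lines. This is my own toy version, assuming a simple horizontal shift by per-pixel disparity with no hole filling – not the technique from the article itself:

```python
def reproject_row(colors, depths, eye_sep, focal):
    """Reproject one scanline to a second eye by shifting each pixel
    horizontally by its disparity (toy version: nearest-wins, no hole fill).
    Disparity in pixels = eye_sep * focal / depth."""
    width = len(colors)
    out = [None] * width              # None marks holes (disoccluded pixels)
    out_z = [float("inf")] * width
    for x in range(width):
        disparity = eye_sep * focal / depths[x]
        nx = int(round(x - disparity))  # shift left for the right eye
        if 0 <= nx < width and depths[x] < out_z[nx]:
            out[nx] = colors[x]         # nearer surface wins conflicts
            out_z[nx] = depths[x]
    return out
```

Holes (`None`) appear where near geometry uncovered background that the original view never rendered; those disocclusion artifacts are exactly why single-image reprojection only “mostly works.”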


I recently spent a weekend in downtown LA helping the SIGGRAPH 2010 committee put together the conference schedule. Looking at the end result from a game developer’s perspective, this is going to be a great conference! More details will be published in early May, but you can see the emphasis on games already; of the current (partial) list of courses, over half have high relevance to games.

If you are a game developer, we need your participation to help make this the biggest game SIGGRAPH ever! A few months ago I posted about the February 18th deadline. That deadline is long gone, but several venues are still open. This is your chance to show off not just in front of your fellow game developers, but also before the leading film graphics professionals and researchers. The most relevant venues for game developers are:

  1. Live Real-Time Demos. The Electronic Theater, a nightly showcase of the best computer graphics clips of the year, has long been a SIGGRAPH highlight and the tentpole event of the Computer Animation Festival (which is an official qualifying festival for the Academy Awards). The Electronic Theater is shown on a giant screen in the largest convention center hall, before an audience packed with the world’s top computer graphics professionals and researchers. Last year SIGGRAPH introduced a new event before the Electronic Theater to showcase the best real-time graphics of the year. The submission deadline for Live Real-Time Demos is April 28th (a week and a half away), so time is short! Submitting your game to Live Real-Time Demos is as simple as uploading about 5 minutes of captured game footage (all submitted materials are held in strict confidentiality) and filling out a short online form. If you want your game submitted, please let your producer know about this ASAP; it will likely take some time to get approval.
  2. SIGGRAPH Dailies! (new for 2010) is where the artists get to shine; details here, including cool example presentations from Pixar. Other SIGGRAPH programs present graphics techniques; ‘SIGGRAPH Dailies!’ showcases the craft and artistry with which these techniques are applied. All excellent production art is welcome: characters, animations, level lighting, particle effects, etc. Each artist whose work is selected will get two minutes at SIGGRAPH to show a video clip of their work and tell an interesting story about creating it. The submission deadline for ‘SIGGRAPH Dailies!’ is May 6th. Submitting art to Dailies is just a matter of uploading 60-90 seconds of video and filling out an online form. If your studio is planning to submit more than one or two Dailies, you should use the batch submission process: designate a representative (like an art director or lead) to recruit presentations and get producer approval. Once the representative has a tentative list of submissions, they should contact SIGGRAPH (click this link and select ‘SIGGRAPH Dailies’ from the drop down menu) to give advance warning of the expected submission count. After all entries have video clips and backstory text files, the studio representative contacts SIGGRAPH again to coordinate a batch submission.
  3. Late-Breaking Talks. Although the initial talk deadline is past, there is one more chance to submit talks: the late-breaking deadline on May 6th. SIGGRAPH talks are 20-minute presentations, typically about practical, down-to-earth film or game production techniques. If you are a graphics programmer or technical artist, you must have developed several such techniques while working on your last game. If there is one you are especially proud of, consider submitting a Talk about it; this only requires a one-page abstract (if you happen to have video or additional documentation, you can add them as supplementary material). To show the detail expected in the abstract and the variety of possible talks, here are five abstracts from 2009: a game production technique, a game system/API, a game rendering technique, a film effects shot, and a film character.

Presenting at one of these forums is a professional opportunity well worth the small amount of work involved. Forward this post to other people on your team so they can get in on the fun!


Since my recent post discussing the antialiasing method used in God of War III, Cedric Perthuis (a graphics programmer on the God of War III development team) was kind enough to email some additional details on how the technique was developed, which I will quote here:

“It was extremely expensive at first. The first not so naive SPU version, which was considered decent, was taking more than 120 ms, at which point, we had decided to pass on the technique. It quickly went down to 80 and then 60 ms when some kind of bottleneck was reached. Our worst scene remained at 60ms for a very long time, but simpler scenes got cheaper and cheaper. Finally, and after many breakthroughs and long hours from our technology teams, especially our technology team in Europe, we shipped with the cheapest scenes around 7 ms, the average Gow3 scene at 12 ms, and the most expensive scene at 20 ms.

In term of quality, the latest version is also significantly better than the initial 120+ ms version. It started with a quality way lower than your typical MSAA2x on more than half of the screen. It was equivalent on a good 25% and was already nicer on the rest. At that point we were only after speed, there could be a long post mortem, but it wasn’t immediately obvious that it would save us a lot of RSX time if any, so it would have been a no go if it hadn’t been optimized on the SPU. When it was clear that we were getting a nice RSX boost ( 2 to 3 ms at first, 6 or 7 ms in the shipped version ), we actually focused on evaluating if it was a valid option visually. Despite of any great performance gain, the team couldn’t compromise on quality, there was a pretty high level to reach to even consider the option. And as for the speed, the improvements on the quality front were dramatic. A few months before shipping, we finally reached a quality similar to MSAA2x on almost the entire screen, and a few weeks later, all the pixelated edges disappeared and the quality became significantly higher than MSAA2x or even MSAA4x on all our still shots, without any exception. In motion it became globally better too, few minor issues remained which just can’t be solved without sub-pixel sampling.

There would be a lot to say about the integration of the technique in the engine and what we did to avoid adding any latency. Contrarily to what I have read on few forums, we are not firing the SPUs at the end of the frame and then wait for the results the next frame. We couldn’t afford to add any significant latency. For this kind of game, gameplay is first, then quality, then framerate. We had the same issue with vsync, we had to come up with ways to use the existing latency. So instead of waiting for the results next frame, we are using the SPUs as parallel coprocessors of the RSX and we use the time we would have spent on the RSX to start the next frame. With 3 ms or 4 ms of SPU latency at most, we are faster than the original 6ms of RSX time we saved. In the end it’s probably a wash in term of latency due to some SPU scheduling consideration. We had to make sure we could kick off the jobs as soon as the RSX was done with the frame, and likewise, when the SPU are done, we need the RSX to pick up where it left and finish the frame. Integrating the technique without adding any latency was really a major task, it involved almost half of the team, and a lot of SPU optimization was required very late in the game.”

“For a long time we worked with a reference code, algorithm changes were made in the reference code and in parallel the optimized code was being optimized further. The optimized version never deviated from the reference code. I assume that doing any kind of cheap approximation would prevent any changes to the algorithm. There’s a point though, where the team got such a good grip of the optimized version that the slow reference code wasn’t useful anymore and got removed. We tweaked some values, made few major changes to the edge detection code and did a lot of testing. I can’t stress it enough: every iteration was carefully checked and evaluated.”

So it looks like my first impression of such techniques – that they are too expensive to be feasible on current consoles – was not that far off the mark; I just hadn’t accounted for what a truly heroic SPU optimization effort could achieve. I wonder what other graphics techniques could be made fast enough for games, given a similar effort?
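To give a rough idea of the first stage of such a technique: MLAA-style methods typically begin with an edge-detection pass before classifying edge shapes and blending along them. Here is a toy luminance-difference version of that first pass – my own illustration of the general approach, not the God of War III code:

```python
def detect_edges(lum, threshold=0.1):
    """Mark pixels whose luminance differs from the neighbor to the left or
    above by more than a threshold. An MLAA-style filter would then classify
    the shapes (L, U, Z) these edges form and blend along their lengths."""
    h, w = len(lum), len(lum[0])
    edges = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            left = x > 0 and abs(lum[y][x] - lum[y][x - 1]) > threshold
            up = y > 0 and abs(lum[y][x] - lum[y - 1][x]) > threshold
            edges[y][x] = left or up
    return edges
```

Even this trivial pass touches every pixel twice per frame, which hints at why the full technique started at 120 ms and why the threshold tweaking Perthuis mentions matters so much: too low and everything is an edge, too high and pixelated silhouettes slip through.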


See what was accepted at Ke-Sen’s site, which undoubtedly will grow in the number of paper links over the ensuing weeks as authors put their preprints up for download. So far, this is the one that’s #1 in the “just plain fun” category.


I received my ACM SIGGRAPH 2010 election form today; it provides some login info and a PIN. SIGGRAPH members can vote for up to three people for the Director-At-Large positions.

I have to admit I can be pretty apathetic about these sorts of ACM and IEEE elections. Sometimes I’ll get inspired and read the statements, sometimes I’ll skim, sometimes I’ll just vote for names I know, and sometimes I’ll ignore the whole thing. This year’s ACM SIGGRAPH election is different for me, because of issues brought up by the Ke-Sen Huang situation. Specifically, the ACM’s copyright policy is lagging behind the needs of its members.

For this SIGGRAPH election I was happy to see that James O’Brien is on the slate. In the past James has worked to retain the rights to his own images, so he’s aware of the issues. In his election statement he writes:

The ACM Digital Library has been a great success, but the move to digital publishing has created conflicts between ACM and member interests. ACM and SIGGRAPH are fundamentally member service organizations and I believe that through thoughtful and progressive copyright policies we can better align organization and member needs. Successful copyright policy has to work across formats, and SIGGRAPH is unique among ACM SIGs in that member-generated content spans a diverse range encompassing text, images, and video. Other organizations have embraced Open Access initiatives, but SIGGRAPH and ACM should be leading the way in this area.

He has my vote. He’s also the only candidate who addresses this area of concern, and in a thoughtful and professional manner. If you’re a SIGGRAPH member, I hope you’ll take the time this year to read over the statements, figure out your login ID and user number, and then go vote.


From here, no idea who made it; I’d like to shake his hand. Found here, but the poster found it via StumbleUpon.

HDR in games

Having seen way too much bloom on the Atacama Desert map for the Russian pilots in Battlefield: Bad Company 2 (my current addiction, e.g. this and this), I can relate.

Edit: please do read the comments for why both pairs of images are wrong. I’m so used to “HDR == tone mapping” that I pretty much forgot the top pair is also technically incorrect (HDR refers to the range of the data; tone mapping takes such data and maps it nicely to a displayable 8-bit image without banding) – thank you, Jaakko.
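To make the distinction concrete, here is a minimal global tone mapping operator in the style of Reinhard’s classic x/(1+x) curve, mapping unbounded HDR luminance down to displayable 8-bit values. This is a sketch of the general idea, not what any particular game does:

```python
def tone_map(hdr, exposure=1.0):
    """Map HDR values in [0, inf) to display values in [0, 255] using the
    simple Reinhard operator x / (1 + x), then gamma-encode for display."""
    out = []
    for v in hdr:
        x = v * exposure
        ldr = x / (1.0 + x)        # compresses highlights, near-linear in darks
        ldr = ldr ** (1.0 / 2.2)   # gamma encoding for an sRGB-ish display
        out.append(round(255 * ldr))
    return out
```

Note that bloom is a separate effect layered on top; tone mapping itself just compresses the range so that very bright inputs approach, but never clip to, pure white.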


Shortly after publication of the first volume, Eric Lengyel is now calling for chapters for the second volume of this series, due to ship at GDC 2011. Details can be found at the book website. The first book turned out pretty well, so this might be a good one to contribute to.


RTR in 3D

All of Google Books are in 3D today, even the excerpts from our second edition:

