Programming


I really enjoyed Yann Esposito’s slides “Category Theory Presentation”, which give a relatively simple and artistic introduction to category theory for Haskell programmers.  (I like Haskell too.)  His slides present the basic definitions of categories, functors, natural transformations, and monads along with working Haskell code.

Here is a fragment of a typical slide.

 

[Slide image: composition]
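To give a flavor of the kind of code in the slides, here is a tiny Haskell example of my own (not taken from the deck) showing morphism composition and the functor laws in action:

    import Data.Char (toUpper)

    -- Morphism composition: (g . f) x = g (f x).
    shout :: String -> String
    shout = map toUpper . (++ "!")

    -- A functor maps objects (types) and morphisms (functions) while
    -- preserving identity and composition:
    --   fmap id      == id
    --   fmap (g . f) == fmap g . fmap f
    example :: Maybe String
    example = fmap shout (Just "compose")   -- Just "COMPOSE!"

    main :: IO ()
    main = print example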

One of my friends showed me his new gaming computer and said that the GPU could do 1.3 teraflops (1.3 trillion floating point operations per second) which is about 500 times faster than my home computer, so I thought, “Imagine how quickly we could search a game tree.”  So I started looking around the internet for a super-great GPU chess engine and found basically nothing!!  Turns out that the amount of memory per thread is too low, the size of the L1 cache is too small, and the alpha-beta pruning algorithm is not quite parallel enough for GPUs to play chess well.  Here is a nice graphic of the L1 access time for a few CPUs and GPUs.
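To see why the pruning resists parallelization, here is a minimal negamax-with-alpha-beta sketch in Haskell (my own illustration; GameTree and alphaBeta are toy names, not code from any engine).  The bound passed to each child move depends on the values of the siblings searched before it, so the moves at a node cannot simply be scored independently, one per GPU thread.

    -- Toy game tree: a static evaluation plus child positions.
    data GameTree = Node Int [GameTree]

    inf :: Int
    inf = 1000000   -- stand-in for infinity

    -- Negamax with alpha-beta pruning.
    alphaBeta :: Int -> Int -> Int -> GameTree -> Int
    alphaBeta depth alpha beta (Node score children)
      | depth == 0 || null children = score
      | otherwise                   = go alpha (-inf) children
      where
        -- The accumulator a carries information from earlier siblings
        -- into the search of later ones: a serial dependency.
        go _ best []     = best
        go a best (c:cs)
          | best >= beta = best   -- beta cutoff: remaining siblings are pruned
          | otherwise    =
              let v     = negate (alphaBeta (depth - 1) (negate beta) (negate a) c)
                  best' = max best v
                  a'    = max a best'
              in  go a' best' cs

    -- For example, alphaBeta 2 (-inf) inf (Node 0 [Node 3 [], Node 5 []]) == -3.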

 

In the paper “Parallel Game Tree Search Using GPU” (2011), Ľubomír Lackovič improved tree search speed by a factor of two to three by using a GPU instead of the more traditional CPU-based tree search for Czech draughts (similar to American checkers).  His tests were run on an ATI Radeon 4890 GPU, an Nvidia GTX 460 GPU, and a quad-core Intel i5 750 CPU.  I had hoped for more speed.

In “Large-Scale Parallel State Space Search Utilizing Graphics Processing Units and Solid State Disks” (2011), Damian Sulewski invented and tested several search algorithms, reviewed game theory algorithms, and applied GPU processing to several games including “Nine Men’s Morris”.  Sulewski used an Intel Core i7 920 CPU with an NVIDIA GeForce GTX 285 GPU to run his tests.  He reported that the GPU was faster by a factor of three to twelve as long as sufficient RAM was available.  If the computer ran out of RAM and had to use disk storage, then GPU performance degraded significantly.  He states,

 

“The observed speed-ups of over one order of magnitude have been obtained (plotted in bold font), exceeding the number of cores on most current PCs. Note that this assertion is true for the dual 6-core CPUs available from Intel, but not on a dual Xeon machine with two quad-core CPUs creating 16 logical cores due to multi-threading. Nonetheless, better speed-ups are possible since NVIDIA GPUs can be used in parallel and the Fermi architecture (e.g. located on the GeForce GTX 480 graphics card) is coming out which will go far beyond the 240 GPU cores we had access to.

For larger levels, however, we observe that the GPU performance degrades. When profiling the code, we identified I/O access as one limiting factor. For example, reading S8,8 from one HDD required 100 seconds, while the expansion of 8 million states, including ranking and unranking required only about 1 second on the GPU.”

 

So GPUs are not the silver bullet for games yet.

 

“A probabilistic programming language is a high-level language that makes it easy for a developer to define probability models and then “solve” these models automatically.  These languages incorporate random events as primitives and their runtime environment handles inference. Now, it is a matter of programming that enables a clean separation between modeling and inference.”

writes Beau Cronin in this post about Probabilistic Programming (PP) (see e.g. [1] and [2]).  He goes on to informally describe a probabilistic graphical model (PGM) and how PP languages, or extensions to existing languages, such as BLOG, BUGS, Church (Lisp), FACTORIE, Figaro (Scala), HANSEI (OCaml), the Hierarchical Bayesian Compiler, Infer.NET, ProbLog, and Stan make it much easier to set up and solve PGMs.  He provides links to a tutorial, a great easy-to-understand video, the NIPS workshop on PP, and several ongoing PP projects.
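To make “random events as primitives” concrete, here is a toy probability-monad sketch in Haskell (my own example; Dist, bernoulli, and marginal are made-up names, not part of any of the languages above).  The model is written as ordinary monadic code, and the “inference” is just exhaustive enumeration:

    import qualified Data.Map as Map

    -- A distribution is a list of weighted outcomes.
    newtype Dist a = Dist { runDist :: [(a, Double)] }

    instance Functor Dist where
      fmap f (Dist xs) = Dist [ (f x, p) | (x, p) <- xs ]

    instance Applicative Dist where
      pure x = Dist [(x, 1)]
      Dist fs <*> Dist xs = Dist [ (f x, p * q) | (f, p) <- fs, (x, q) <- xs ]

    instance Monad Dist where
      Dist xs >>= k = Dist [ (y, p * q) | (x, p) <- xs, (y, q) <- runDist (k x) ]

    -- A random event as a primitive.
    bernoulli :: Double -> Dist Bool
    bernoulli p = Dist [(True, p), (False, 1 - p)]

    -- The model: two independent coin flips; we ask for the number of heads.
    twoFlips :: Dist Int
    twoFlips = do
      a <- bernoulli 0.5
      b <- bernoulli 0.5
      return (length (filter id [a, b]))

    -- "Inference" by exhaustive enumeration: merge equal outcomes.
    marginal :: Ord a => Dist a -> [(a, Double)]
    marginal = Map.toList . Map.fromListWith (+) . runDist

    main :: IO ()
    main = print (marginal twoFlips)   -- [(0,0.25),(1,0.5),(2,0.25)]

Real PP systems replace the enumeration with serious inference (MCMC, variational methods, and so on), which is exactly the separation between modeling and inference that Cronin describes.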

I also enjoyed the more detailed post “Why Probabilistic Programming Matters” by Rob Zinkov.  Rob shows how to represent a number of standard machine learning techniques in a PP language.

 

John Regehr writes this post about a compiler optimization technique called “Stochastic Superoptimization”.  “Stochastic Superoptimization” systematically searches for algorithmic improvements to code using machine learning algorithms similar to multi-armed bandit strategies.  It appears to be related to “Programming by Optimization”.  “Stochastic Superoptimization” is more like a very good optimization flag on a compiler.  “Programming by Optimization” is constructing the program in such a fashion that design options are exposed and easily manipulable by an optimization program trying to maximize some performance metric.  The “Programming by Optimization” community seems to mostly use BOA, the Bayesian optimization algorithm (see [1], [2]).  I am hoping to read and write more about both of these ideas later.
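As a rough illustration of the multi-armed-bandit flavor of such a search, here is a toy epsilon-greedy step in Haskell (my own sketch, not the algorithm used by any actual superoptimizer; Arm, step, and the benchmark action are hypothetical names).  Each “arm” could stand for a candidate code rewrite, and pulling an arm means benchmarking it:

    import System.Random (randomRIO)
    import Data.List (maximumBy)
    import Data.Ord (comparing)

    -- Statistics for one arm (e.g. one candidate code rewrite).
    data Arm = Arm { pulls :: Int, total :: Double }

    meanReward :: Arm -> Double
    meanReward (Arm 0 _) = 0
    meanReward (Arm n t) = t / fromIntegral n

    -- One epsilon-greedy step: usually exploit the best arm so far,
    -- occasionally explore a random one, then record the observed reward.
    step :: Double -> (Int -> IO Double) -> [Arm] -> IO [Arm]
    step eps benchmark arms = do
      r <- randomRIO (0, 1)
      i <- if r < eps
             then randomRIO (0, length arms - 1)
             else return (fst (maximumBy (comparing (meanReward . snd)) (zip [0..] arms)))
      reward <- benchmark i
      let update j a
            | j == i    = Arm (pulls a + 1) (total a + reward)
            | otherwise = a
      return (zipWith update [0..] arms)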

I enjoyed reading Scott Porad’s abbreviated post on Scrum.

My friend Mark K pointed me toward these resources for Scrum agile software development.  (See also fueled.com.)

 

The Scrum Framework in 10 minutes:
http://www.youtube.com/watch?v=_BWbaZs1M_8

Jeff Sutherland videos (Jeff Sutherland is the inventor of Scrum):

Discusses His Vision for Scrum (3:00): http://www.youtube.com/watch?v=LjBN2CjKDcU
The Structure of Scrum (5:37): http://www.youtube.com/watch?v=1RmCahV3Tbw&feature=relmfu
How does Scrum Really Work? (3:31): http://www.youtube.com/watch?v=eIyaCPcUuyQ&feature=relmfu
What Is the Product Backlog Review in Scrum? (1:54): http://www.youtube.com/watch?v=iwkb56GQg9Q&feature=relmfu
What does it mean to be Ready-Ready? (3:30): http://www.youtube.com/watch?v=XkhJDbaW0j0&feature=relmfu
What Is the Scrum Daily Meeting? (2:09): http://www.youtube.com/watch?v=lXOhfKV6jLQ&feature=relmfu
Inside the Sprint Retrospective (2:30): http://www.youtube.com/watch?v=MFLvQXMNrO8&feature=relmfu
Understanding the Burndown Chart in Scrum (3:48): http://www.youtube.com/watch?v=HV76WzqpSI0&feature=relmfu
Best practices for Scrum (1:47): http://www.youtube.com/watch?v=jQULZDTDG8Q&feature=relmfu
The Nokia Test (6:14): http://www.youtube.com/watch?v=1yZ3J8C4MK0&feature=relmfu

 

Two lists of important algorithms:

http://www.risc.jku.at/people/ckoutsch/stuff/e_algorithms.html

http://www.uta.edu/faculty/rcli/TopTen/topten.pdf

 

If you use the integer relation detection algorithm, please leave a comment on how you used it.  (The Euclidean algorithm does not count!)

I think the fast multipole algorithm is used by electrical engineers.  If you use it, please leave a comment.

It occurs to me that when designing an algorithm for a large organization, the designer should minimize
$$\alpha T + \beta L + \gamma H$$
where $\alpha$, $\beta$, and $\gamma$ are unknown positive constants, $T$ is the time your code needs to run, $L$ is the number of lines of code required, and $H$ is the number of hours required by other engineers to understand your algorithm.

PS: The above statement is considerably less funny if you try to estimate $\alpha$, $\beta$, and $\gamma$.