<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Artificial Intelligence Blog &#187; Languages</title>
	<atom:link href="http://artent.net/category/languages/feed/" rel="self" type="application/rss+xml" />
	<link>http://artent.net</link>
	<description>We&#039;re blogging machines!</description>
	<lastBuildDate>Sat, 14 Mar 2026 20:14:25 +0000</lastBuildDate>
	<language>en-US</language>
		<sy:updatePeriod>hourly</sy:updatePeriod>
		<sy:updateFrequency>1</sy:updateFrequency>
	<generator>http://wordpress.org/?v=4.0</generator>
	<item>
		<title>“Category Theory for Programmers”</title>
		<link>http://artent.net/2020/07/14/category-theory-for-programmers/</link>
		<comments>http://artent.net/2020/07/14/category-theory-for-programmers/#comments</comments>
		<pubDate>Tue, 14 Jul 2020 10:15:54 +0000</pubDate>
		<dc:creator><![CDATA[hundalhh]]></dc:creator>
				<category><![CDATA[Languages]]></category>
		<category><![CDATA[Math]]></category>

		<guid isPermaLink="false">http://artent.net/?p=2926</guid>
		<description><![CDATA[I’ve been reading “Category Theory for Programmers” which was suggested to me by Mark Ettinger.  This book presents many examples in C++ and Haskell.  It teaches you some Haskell as you read the book.  It uses almost zero upper level mathematics and it skips almost all of the mathematical formalities.  If you decide that you [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>I’ve been reading “<a href="https://bartoszmilewski.com/2014/10/28/category-theory-for-programmers-the-preface/">Category Theory for Programmers</a>” which was suggested to me by Mark Ettinger.  This book presents many examples in C++ and Haskell.  It teaches you some Haskell as you read the book.  It uses almost zero upper level mathematics and it skips almost all of the mathematical formalities.  If you decide that you want to read it, then you might want to read the first six chapters of &#8220;<a href="http://learnyouahaskell.com">Learn You a Haskell for Great Good!</a>” and write a few small Haskell programs first.  (I also would suggest trying to solve the first three problems in Project Euler <a href="https://projecteuler.net/archives">https://projecteuler.net/archives</a>  using Haskell.)</p>
<div></div>
<div>
<div>I find the book to be easy to read and informative.  When the author makes a statement like   A*(B+C) = A*B + A*C where * means categorical product, + means coproduct, and = means isomorphic, I find myself trying to figure out the categories where the statement is true and the categories for which it is false.  (It is true for the Category of Set and the Category Hask.  The book is mostly about those categories.) That type of thinking improves my understanding of category theory.  The book is also reawakening the parts of my brain that had forgotten parts of category theory and Haskell.</div>
<div></div>
<div></div>
<p>Interestingly, in category theory, $A*(B+C) = A*B + A*C$ can be translated into the following theorems:</p>
<ol>
<li> A*(B+C) = A*B + A*C  is true for all positive integers A, B, and C,</li>
<li>max(A, min(B,C)) = min( max(A,B), max(A,C))  for all real numbers A, B, and C,</li>
<li>lcm(A, gcd(B,C)) = gcd( lcm(A,B), lcm(A,C) )  for all positive integers A, B, and C, where lcm means least common multiple and gcd means greatest common divisor, and</li>
<li>intersection(A, union(B,C)) = union( intersection(A,B), intersection(A, C)).</li>
</ol>
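<p>The categorical identity can also be witnessed directly in the category of sets: a pair of mutually inverse functions between A*(B+C) and A*B + A*C. Here is a minimal sketch in Python rather than Haskell (the tags "inl"/"inr" stand in for the coproduct injections; the names are mine, not the book's):</p>

```python
# Model the coproduct B + C as tagged values ("inl", b) or ("inr", c).
def distribute(pair):
    """A x (B + C)  ->  (A x B) + (A x C)"""
    a, (tag, x) = pair
    return (tag, (a, x))

def factor(tagged):
    """(A x B) + (A x C)  ->  A x (B + C)"""
    tag, (a, x) = tagged
    return (a, (tag, x))

# Round-tripping every element of small finite sets witnesses the isomorphism.
A, B, C = [1, 2], ["b1", "b2"], ["c1"]
left = ([(a, ("inl", b)) for a in A for b in B]
        + [(a, ("inr", c)) for a in A for c in C])
assert all(factor(distribute(p)) == p for p in left)
assert len(set(distribute(p) for p in left)) == len(left)  # distribute is injective
```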
<div>
<div>If you don’t believe the four theorems, here is some Mathematica code that tests each theorem:</div>
<div></div>
<pre>Unprotect[C];
test[ funcRandChoose_, prod_, sum_, i_] := Tally[ Table[ 
     A = funcRandChoose[];
     B = funcRandChoose[];
     C = funcRandChoose[];
      prod[ A, sum[B, C]] == sum[ prod[A, B] , prod[A, C]], 
 {i}]];

test[ RandomInteger[{1, 1000}] &amp;, Times, Plus, 100]
test[ RandomInteger[{-1000, 1000}] &amp;, Max, Min, 100]
test[ RandomInteger[{-1000, 1000}] &amp;, LCM, GCD, 100]
test[ RandomSample[ Subsets[ Range[5]]] &amp;, Intersection, Union, 100]</pre>
<div></div>
</div>
</div>
]]></content:encoded>
			<wfw:commentRss>http://artent.net/2020/07/14/category-theory-for-programmers/feed/</wfw:commentRss>
		<slash:comments>1</slash:comments>
		</item>
		<item>
		<title>The Best Deep Learning Blog Post Ever  (Christopher Olah)</title>
		<link>http://artent.net/2014/07/23/the-best-deep-learning-blog-post-ever-christopher-olah/</link>
		<comments>http://artent.net/2014/07/23/the-best-deep-learning-blog-post-ever-christopher-olah/#comments</comments>
		<pubDate>Wed, 23 Jul 2014 12:20:21 +0000</pubDate>
		<dc:creator><![CDATA[hundalhh]]></dc:creator>
				<category><![CDATA[Clustering]]></category>
		<category><![CDATA[Deep Belief Networks]]></category>
		<category><![CDATA[Languages]]></category>
		<category><![CDATA[Neural Nets]]></category>

		<guid isPermaLink="false">http://artent.net/?p=2529</guid>
		<description><![CDATA[&#160; Christopher Olah wrote an incredibly insightful post on Deep Neural Nets (DNNs) titled &#8220;Deep Learning, NLP, and Representations&#8220;.  In his post, Chris looks at Deep Learning from a Natural Language Processing (NLP) point of view.  He discusses how many different deep neural nets designed for different NLP tasks learn the same things.   According [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://colah.github.io/posts/2014-07-NLP-RNNs-Representations/"><img class="aligncenter" src="http://colah.github.io/posts/2014-07-NLP-RNNs-Representations/img/Socher-ImageClassManifold.png" alt="" width="1514" height="999" /></a></p>
<p>&nbsp;</p>
<p><a href="http://colah.github.io/">Christopher Olah</a> wrote an incredibly insightful post on <a href="http://en.wikipedia.org/wiki/Deep_learning">Deep Neural Nets</a> (DNNs) titled &#8220;<a href="http://colah.github.io/posts/2014-07-NLP-RNNs-Representations/">Deep Learning, NLP, and Representations</a>&#8220;.  In his post, Chris looks at Deep Learning from a <a href="http://en.wikipedia.org/wiki/Natural_language_processing">Natural Language Processing</a> (NLP) point of view.  He discusses how many different deep neural nets designed for different NLP tasks learn the same things.   According to Chris and the many papers he cites, these DNNs will automatically learn to intelligently embed words into a vector space.  Words with related meanings will often be clustered together.  More surprisingly, analogies such as &#8220;France is to Paris as Italy is to Rome&#8221; or &#8220;Einstein is to scientist as Picasso is to Painter&#8221; are also learned by many DNNs when applied to NLP tasks.  Chris reproduced the chart of analogies below from &#8220;<a href="http://arxiv.org/pdf/1301.3781.pdf">Efficient Estimation of Word Representations in Vector Space</a>&#8221; by Mikolov, Chen, Corrado, and Dean (2013).</p>
<div style="width: 1010px" class="wp-caption alignnone"><img src="http://colah.github.io/posts/2014-07-NLP-RNNs-Representations/img/Mikolov-AnalogyTable.png" alt="" width="1000" height="430" /><p class="wp-caption-text">Relationship pairs in a word embedding. From Mikolov et al. (2013).</p></div>
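<p>The analogy learning Chris describes corresponds to simple vector arithmetic in the embedding space: the offset v(Paris) &#8211; v(France) approximately equals v(Rome) &#8211; v(Italy). A toy Python illustration with invented 2-D vectors (real embeddings are learned and have hundreds of dimensions; these numbers are made up purely to show the arithmetic):</p>

```python
# Toy 2-D "embeddings", invented purely for illustration (real ones are learned).
vec = {
    "France": (1.0, 0.0), "Paris":  (1.0, 1.0),
    "Italy":  (2.0, 0.0), "Rome":   (2.0, 1.0),
    "Spain":  (3.0, 0.0), "Madrid": (3.0, 1.0),
}

def add(u, v): return (u[0] + v[0], u[1] + v[1])
def sub(u, v): return (u[0] - v[0], u[1] - v[1])

def nearest(q, exclude):
    # Closest stored vector by squared Euclidean distance, skipping the query words.
    dist = lambda w: (vec[w][0] - q[0]) ** 2 + (vec[w][1] - q[1]) ** 2
    return min((w for w in vec if w not in exclude), key=dist)

# "France is to Paris as Italy is to ?"  ->  v(Paris) - v(France) + v(Italy)
q = add(sub(vec["Paris"], vec["France"]), vec["Italy"])
print(nearest(q, {"Paris", "France", "Italy"}))  # Rome
```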
<p>Additionally, the post details the implementation of recurrent deep neural nets for NLP.  Numerous papers are cited, but the writing is non-technical enough that anyone can gain insights into how DNNs work by reading Chris&#8217;s post.</p>
<p>So why don&#8217;t you just read it like <a href="http://colah.github.io/posts/2014-07-NLP-RNNs-Representations/">NOW  &#8212; CLICK HERE</a>.   <img src="http://artent.net/wp-includes/images/smilies/icon_smile.gif" alt=":)" class="wp-smiley" /></p>
<p>&nbsp;</p>
]]></content:encoded>
			<wfw:commentRss>http://artent.net/2014/07/23/the-best-deep-learning-blog-post-ever-christopher-olah/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>&#8220;Matrix Factorizations and the Grammar of Life&#8221;</title>
		<link>http://artent.net/2013/03/22/matrix-factorizations-and-the-grammar-of-life/</link>
		<comments>http://artent.net/2013/03/22/matrix-factorizations-and-the-grammar-of-life/#comments</comments>
		<pubDate>Fri, 22 Mar 2013 14:53:16 +0000</pubDate>
		<dc:creator><![CDATA[hundalhh]]></dc:creator>
				<category><![CDATA[Abstraction for Learning]]></category>
		<category><![CDATA[General ML]]></category>
		<category><![CDATA[Languages]]></category>
		<category><![CDATA[Sparsity]]></category>
		<category><![CDATA[Statistics]]></category>

		<guid isPermaLink="false">http://162.243.213.31/?p=1680</guid>
		<description><![CDATA[I&#8217;m quite excited by the Nuit Blanche post on the papers &#8220;Structure Discovery in Nonparametric Regression through Compositional Kernel Search&#8221; (Duvenaud, Lloyd, Grosse, Tenenbaum, Ghahramani 2013) and &#8220;Exploiting compositionality to explore a large space of model structures&#8221; (Grosse, Salakhutdinov, Freeman, and Tenenbaum 2012).  For years my old company Momentum Investment Services, Carl, and I have been looking for fast, systematic [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>I&#8217;m quite excited by the <a href="http://nuit-blanche.blogspot.com/2013/03/sunday-morning-insight-matrix.html?utm_source=feedburner&amp;utm_medium=feed&amp;utm_campaign=Feed%3A+blogspot%2FwCeDd+%28Nuit+Blanche%29&amp;utm_content=Google+Reader">Nuit Blanche post</a> on the papers &#8220;<a href="http://arxiv.org/pdf/1302.4922.pdf">Structure Discovery in Nonparametric Regression through Compositional Kernel Search</a>&#8221; (Duvenaud, Lloyd, Grosse, Tenenbaum, Ghahramani 2013) and &#8220;<a href="http://people.csail.mit.edu/rgrosse/uai2012-matrix.pdf">Exploiting compositionality to explore a large space of model structures</a>&#8221; (Grosse, Salakhutdinov, Freeman, and Tenenbaum 2012).  For years my old company Momentum Investment Services, Carl, and I have been looking for fast, systematic ways to search large hypothesis spaces.  We considered context-free grammars as a means of generating hypotheses.  Carl and I did not get anywhere with that research, but now it seems that others have succeeded.  Be sure to look over the article, the blog posts, and the comments.</p>
]]></content:encoded>
			<wfw:commentRss>http://artent.net/2013/03/22/matrix-factorizations-and-the-grammar-of-life/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Type Inference in Scala is Turing Complete</title>
		<link>http://artent.net/2013/02/28/type-inference-in-scala-is-turing-complete/</link>
		<comments>http://artent.net/2013/02/28/type-inference-in-scala-is-turing-complete/#comments</comments>
		<pubDate>Thu, 28 Feb 2013 13:28:01 +0000</pubDate>
		<dc:creator><![CDATA[hundalhh]]></dc:creator>
				<category><![CDATA[Languages]]></category>

		<guid isPermaLink="false">http://162.243.213.31/?p=1499</guid>
		<description><![CDATA[Check out &#8220;Types Gone Wild! SKI at Compile-Time&#8221; and &#8220;Turing Equivalent vs. Turing Complete&#8221;  at Good Math, Bad Math &#8220;Scala type level encoding of the SKI calculus&#8221; at Michid’s Weblog; and &#8220;C++ templates Turing-complete?&#8221; and &#8220;The type system in Scala is Turing complete. Proof? Example? Benefits?&#8221; at stackoverflow.]]></description>
				<content:encoded><![CDATA[<p>Check out</p>
<ul>
<li>&#8220;<a style="line-height: 1.4" href="http://scientopia.org/blogs/goodmath/2012/06/03/numeric-pareidolia-and-vortex-math/">Types Gone Wild! SKI at Compile-Time</a>&#8221; and &#8220;<a style="line-height: 1.4" href="http://scienceblogs.com/goodmath/2007/01/05/turing-equivalent-vs-turing-co/">Turing Equivalent vs. Turing Complete</a>&#8221;  at <a style="line-height: 1.4" href="http://scientopia.org/blogs/goodmath/">Good Math, Bad Math</a></li>
<li>&#8220;<a style="line-height: 1.4" href="http://michid.wordpress.com/2010/01/29/scala-type-level-encoding-of-the-ski-calculus/">Scala type level encoding of the SKI calculus</a>&#8221; at <a style="line-height: 1.4" href="http://michid.wordpress.com/">Michid’s Weblog</a>; and</li>
<li>&#8220;<a style="line-height: 1.4" href="http://stackoverflow.com/questions/189172/c-templates-turing-complete">C++ templates Turing-complete?</a>&#8221; and &#8220;<a style="line-height: 1.4" href="http://stackoverflow.com/questions/4047512/the-type-system-in-scala-is-turing-complete-proof-example-benefits">The type system in Scala is Turing complete. Proof? Example? Benefits?</a>&#8221; at <a style="line-height: 1.4" href="http://stackoverflow.com/">stackoverflow</a>.</li>
</ul>
]]></content:encoded>
			<wfw:commentRss>http://artent.net/2013/02/28/type-inference-in-scala-is-turing-complete/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>&#8220;An Estimate of an Upper Bound for the Entropy of English&#8221;</title>
		<link>http://artent.net/2013/02/04/an-estimate-of-an-upper-bound-for-the-entropy-of-english/</link>
		<comments>http://artent.net/2013/02/04/an-estimate-of-an-upper-bound-for-the-entropy-of-english/#comments</comments>
		<pubDate>Mon, 04 Feb 2013 13:37:09 +0000</pubDate>
		<dc:creator><![CDATA[hundalhh]]></dc:creator>
				<category><![CDATA[Information Theory]]></category>
		<category><![CDATA[Languages]]></category>

		<guid isPermaLink="false">http://162.243.213.31/?p=1423</guid>
		<description><![CDATA[In the short, well-written paper &#8220;An Estimate of an Upper Bound for the Entropy of English&#8221;, Brown, Stephen Della Pietra, Mercer, Vincent Della Pietra, and Lai (1992) give an estimated upper bound for English of 1.75 bits per character.  That estimate was somewhat lower than Shannon&#8217;s original upper bound of 2.3 bits per character. Along the way they give nice [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>In the short, well-written paper &#8220;<a href="http://acl.ldc.upenn.edu/J/J92/J92-1002.pdf">An Estimate of an Upper Bound for the Entropy of English</a>&#8221;, Brown, Stephen Della Pietra, Mercer, Vincent Della Pietra, and Lai (1992) give an estimated upper bound for English of 1.75 bits per character.  That estimate was somewhat lower than <a href="http://languagelog.ldc.upenn.edu/myl/Shannon1950.pdf">Shannon&#8217;s original upper bound</a> of 2.3 bits per character. Along the way they give nice simple explanations of <a href="http://en.wikipedia.org/wiki/Entropy">entropy</a> and <a href="http://en.wikipedia.org/wiki/Cross_entropy">cross-entropy</a> as applied to text.  More recently Montemurro and Zanette (2011) showed that the entropy of all languages is around 3.5 bits per word. (see <a href="http://www.wired.com/wiredscience/2011/05/universal-entropy/">Wired Article</a> and <a href="http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0019875">Plos One</a>)</p>
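<p>The quantity being bounded is, concretely, the average number of bits per character a model needs to encode text: the better the model predicts the next character, the fewer bits. A minimal Python sketch with a unigram character model (a far weaker model than the paper's, so its estimate comes out far above 1.75 bits):</p>

```python
from collections import Counter
from math import log2

def unigram_entropy_per_char(text):
    # H = -sum_c p(c) * log2 p(c), with p estimated from character counts.
    counts = Counter(text)
    n = len(text)
    return -sum((k / n) * log2(k / n) for k in counts.values())

sample = "the quick brown fox jumps over the lazy dog " * 20
h = unigram_entropy_per_char(sample)
print(h)  # well above the 1.75 bits/char achievable with a strong model
```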
]]></content:encoded>
			<wfw:commentRss>http://artent.net/2013/02/04/an-estimate-of-an-upper-bound-for-the-entropy-of-english/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Type Inference and Type Theory for Julia (Video)</title>
		<link>http://artent.net/2012/12/14/great-julia-video/</link>
		<comments>http://artent.net/2012/12/14/great-julia-video/#comments</comments>
		<pubDate>Fri, 14 Dec 2012 07:42:56 +0000</pubDate>
		<dc:creator><![CDATA[hundalhh]]></dc:creator>
				<category><![CDATA[Languages]]></category>

		<guid isPermaLink="false">http://162.243.213.31/?p=918</guid>
		<description><![CDATA[Julia can be written like Matlab without typing information and it runs very fast, at nearly the speed of C, because it does runtime type inference and JIT compilation. Underneath, it has a sophisticated dynamic algebraic type system that can be manipulated by the programmer (much like Haskell).  Carl sent me a link to this video about how the language achieves [&#8230;]]]></description>
				<content:encoded><![CDATA[<p><a href="http://julialang.org/">Julia</a> can be written like Matlab without <a href="http://en.wikipedia.org/wiki/Type_system">typing</a> information and it runs very fast, at nearly the speed of C, because it does <em>runtime</em> <a href="http://en.wikipedia.org/wiki/Type_inference">type inference</a> and <a href="http://en.wikipedia.org/wiki/Just-in-time_compilation">JIT compilation</a>. Underneath, it has a sophisticated dynamic <a href="http://en.wikipedia.org/wiki/Algebraic_data_type">algebraic type system</a> that can be manipulated by the programmer (much like <a href="http://www.haskell.org/haskellwiki/Haskell">Haskell</a>).  Carl sent me a link to this <a href="http://coursematerials.stanford.edu/courses/ee380/120229-ee380-300.asx">video</a> about how the language achieves this level of type inference and type manipulation.</p>
]]></content:encoded>
			<wfw:commentRss>http://artent.net/2012/12/14/great-julia-video/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
<enclosure url="http://coursematerials.stanford.edu/courses/ee380/120229-ee380-300.asx" length="128" type="video/asf" />
		</item>
		<item>
		<title>&#8220;Semantic Hashing&#8221;</title>
		<link>http://artent.net/2012/12/12/semantic-hashing/</link>
		<comments>http://artent.net/2012/12/12/semantic-hashing/#comments</comments>
		<pubDate>Wed, 12 Dec 2012 19:30:03 +0000</pubDate>
		<dc:creator><![CDATA[hundalhh]]></dc:creator>
				<category><![CDATA[Deep Belief Networks]]></category>
		<category><![CDATA[Graphical Models]]></category>
		<category><![CDATA[Languages]]></category>

		<guid isPermaLink="false">http://162.243.213.31/?p=883</guid>
		<description><![CDATA[In &#8220;Semantic Hashing&#8220;, Salakhutdinov and Hinton (2007) show how to classify documents with binary vectors.  They combine deep learning and graphical models to assign each document a binary vector.  Similar documents can be found by using the L1 difference between the binary vectors.  Here is their abstract. We show how to learn a deep graphical model of [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>In &#8220;<a href="http://www.utstat.toronto.edu/~rsalakhu/papers/semantic_final.pdf">Semantic Hashing</a>&#8220;, Salakhutdinov and Hinton (2007) show how to classify documents with binary vectors.  They combine deep learning and graphical models to assign each document a binary vector.  Similar documents can be found by using the L1 difference between the binary vectors.  Here is their abstract.</p>
<blockquote><p>We show how to learn a deep graphical model of the word-count vectors obtained from a large set of documents. The values of the latent variables in the deepest layer are easy to infer and give a much better representation of each document than Latent Semantic Analysis. When the deepest layer is forced to use a small number of binary variables (e.g. 32), the graphical model performs “semantic hashing”: Documents are mapped to memory addresses in such a way that semantically similar documents are located at nearby addresses. Documents similar to a query document can then be found by simply accessing all the addresses that differ by only a few bits from the address of the query document. This way of extending the efficiency of hash-coding to approximate matching is much faster than locality sensitive hashing, which is the fastest current method. By using semantic hashing to filter the documents given to TF-IDF, we achieve higher accuracy than applying TF-IDF to the entire document set.</p></blockquote>
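<p>The address-probing trick in the abstract is easy to make concrete: given a query code, enumerate every address within a small Hamming radius and check which ones hold documents. A small Python sketch (the 8-bit codes below are arbitrary stand-ins, not codes learned by their model):</p>

```python
from itertools import combinations

def neighbors_within(code, radius, nbits):
    # All addresses at Hamming distance <= radius from `code`.
    out = [code]
    for r in range(1, radius + 1):
        for bits in combinations(range(nbits), r):
            flipped = code
            for b in bits:
                flipped ^= 1 << b
            out.append(flipped)
    return out

# Hypothetical 8-bit semantic codes for four documents.
index = {0b10110010: "doc A", 0b10110011: "doc B",
         0b10100010: "doc C", 0b01001101: "doc D"}

query = 0b10110010
hits = [index[a] for a in neighbors_within(query, 2, 8) if a in index]
print(hits)  # doc A (distance 0), then doc B and doc C (distance 1)
```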
]]></content:encoded>
			<wfw:commentRss>http://artent.net/2012/12/12/semantic-hashing/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>The Natural Language Toolkit Python Library</title>
		<link>http://artent.net/2012/11/16/the-natural-language-toolkit-python-library/</link>
		<comments>http://artent.net/2012/11/16/the-natural-language-toolkit-python-library/#comments</comments>
		<pubDate>Fri, 16 Nov 2012 12:48:12 +0000</pubDate>
		<dc:creator><![CDATA[hundalhh]]></dc:creator>
				<category><![CDATA[Languages]]></category>

		<guid isPermaLink="false">http://162.243.213.31/?p=794</guid>
		<description><![CDATA[The NLTK Python Library contains a large number of packages for text manipulation and classification.  It includes routines for classification (maximum entropy, naive Bayes, support vector machines, an interface to the Weka library, expectation maximization, k-means, conditional random fields,&#8230;), text-manipulation, parsing, and graphics.]]></description>
				<content:encoded><![CDATA[<p>The <a href="http://nltk.org/">NLTK Python Library</a> contains a large number of packages for text manipulation and classification.  It includes routines for classification (maximum entropy, naive Bayes, support vector machines, an interface to the Weka library, expectation maximization, k-means, conditional random fields,&#8230;), text-manipulation, parsing, and graphics.</p>
]]></content:encoded>
			<wfw:commentRss>http://artent.net/2012/11/16/the-natural-language-toolkit-python-library/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Speed of Various Languages &#8211; Julia</title>
		<link>http://artent.net/2012/10/10/speed-of-various-languages/</link>
		<comments>http://artent.net/2012/10/10/speed-of-various-languages/#comments</comments>
		<pubDate>Thu, 11 Oct 2012 00:37:51 +0000</pubDate>
		<dc:creator><![CDATA[hundalhh]]></dc:creator>
				<category><![CDATA[Languages]]></category>

		<guid isPermaLink="false">http://162.243.213.31/?p=585</guid>
		<description><![CDATA[About a year ago, I wrote a simple prime testing algorithm to test the speed of several languages.   I just added Julia (windows binary) to the list. Time Language 0.3 Julia 0.3 VB 6.0 Compiled 0.3 VC++ 6.0 0.4 Digital Mars C 0.5 GHC Haskell Compiled with -O2 flag 0.7 Netbeans 6.9 Java 0.8 VB [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>About a year ago, I wrote a simple prime testing algorithm to test the <a href="http://shootout.alioth.debian.org/">speed of several languages</a>.   I just added <a href="http://julialang.org/">Julia</a> (<a href="https://github.com/JuliaLang/julia/downloads">windows binary</a>) to the list.</p>
<pre style="line-height: 10pt;">Time Language

0.3  Julia
0.3  VB 6.0 Compiled
0.3  VC++ 6.0
0.4  Digital Mars C
0.5  GHC Haskell Compiled with -O2 flag
0.7  Netbeans 6.9 Java
0.8  VB 6.0 (Interpreted strong typed)
1.3  Mathematica 8 compiled with Compilation Target-&gt;"C" 
1.9  Matlab 7.10.0.499 (R2010a)
2.5  GHC Haskell Compiled
3.6  "Compiled" Mathematica 8 
3.7  QiII SBC
5.0  Python IDLE 2.6.4
6    1992 Turbo C
7    Compiled PLT Scheme
7    VB 6.0 (Interpreted no type info)
7    Excel VBA (Interp)
9    Clojure (Clojure Box 1.2 with type coercion)
11   "Compiled" Mathematica 7
19   PLT Scheme
20   netbeans python
20   ruby 1.8.6 for Windows
25   QiII Clisp
40   Emacs lisp using Cygwin
117  Mathematica 7
131  Mathematica 8
185  GHC Haskell Interactive Mode</pre>
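<p>The post doesn't include the benchmark itself, but a simple trial-division prime test, the kind of tight numeric loop that exposes interpreter overhead, would look something like this Python reconstruction (not the original code):</p>

```python
def is_prime(n):
    # Trial division: a tight numeric loop, so interpreted languages
    # pay per-iteration overhead that compiled ones do not.
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# A small stand-in workload: count the primes below 20000.
print(sum(1 for n in range(20000) if is_prime(n)))  # 2262
```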
]]></content:encoded>
			<wfw:commentRss>http://artent.net/2012/10/10/speed-of-various-languages/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>Paper.js</title>
		<link>http://artent.net/2012/09/14/paper-js/</link>
		<comments>http://artent.net/2012/09/14/paper-js/#comments</comments>
		<pubDate>Fri, 14 Sep 2012 20:26:59 +0000</pubDate>
		<dc:creator><![CDATA[hundalhh]]></dc:creator>
				<category><![CDATA[Languages]]></category>

		<guid isPermaLink="false">http://162.243.213.31/?p=410</guid>
		<description><![CDATA[Carl sent me this link.  Check it out.  Fun! &#160;]]></description>
				<content:encoded><![CDATA[<p><a href="http://162.243.213.31/wp-content/uploads/2012/08/paperjs.png"><img class="alignnone size-full wp-image-411" title="paperjs" src="http://162.243.213.31/wp-content/uploads/2012/08/paperjs.png" alt="" width="914" height="565" /></a></p>
<p>Carl sent me this <a href="http://paperjs.org/">link</a>.  Check it out.  Fun!</p>
<p>&nbsp;</p>
]]></content:encoded>
			<wfw:commentRss>http://artent.net/2012/09/14/paper-js/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
	</channel>
</rss>
