Crystal: First Contact

June 23, 2019

Categories: Technical Tags: PL Network Crystal

Crystal is a Ruby-inspired language, except with a great type system and Lisp-like AST macros. I don't really know Ruby that well because I was never a fan: it felt like a fusion of the less elegant parts of Smalltalk and the less practical parts of Perl that also ran terribly slowly. Rails eventually became a great hit and carried it to relevance, but I wasn't really interested in those aspects either.

However, now I am in the mood for writing some network-facing code and understanding concurrency better for a change. My language shortlist looks like: Erlang/Elixir, Golang, Pony and Crystal. The first two have mature runtimes for (M:N) green threads, the difference being that the former uses the actor model with message passing while the latter uses channels for communication. Pony seems to have lost its early hype, but it promises a type system on top of the actor model that rules out data races and race conditions statically.

Crystal is a bit different (and simpler). It's actually single-threaded like Node.js and achieves concurrency through pervasive use of asynchronous, non-blocking I/O APIs. But unlike Node.js, Crystal doesn't suffer from callback hell; code almost reads as if it were synchronous because of its seamless use of Fibers (really coroutines). So Crystal's concurrency is "co-operative" as opposed to "pre-emptive". The so-called async/await syntax isn't novel either: C# got it in version 5, Python has had it since 3.5(?), and even JS now has it (though built on promises rather than coroutines, I believe). But I think their syntax is still not very intuitive (and for Python, asyncio still feels like a leaky abstraction to me). Last I checked, the Rust people were still bikeshedding over it. Somehow Crystal feels the cleanest to me, but then it does have macros.

Here is the proverbial 'hello world' of concurrency (a chat server over sockets) that I could whip up within an hour of knowing Crystal:

require "socket"

class Server
  @@clients = [] of Tuple(String, TCPSocket)

  def self.handle_client(client)
    client.puts("Server: What's your name?")
    name = client.read_line
    puts "#{name} has connected"
    client.puts("Server: Hello #{name}! Welcome to the chat!")
    identity = {name, client}
    @@clients << identity

    # broadcast every incoming line to everyone except the sender
    client.each_line do |msg|
      @@clients.each do |n, c|
        if n != name
          c.puts("#{name}: #{msg}")
        end
      end
    end

    # the line stream ended, so forget this client
    @@clients.delete(identity)
    puts "#{name} has disconnected"
  end

  def self.main_loop
    server ="", 4444)
    loop do
      client = server.accept
      spawn self.handle_client(client)
    end
  end
end

Server.main_loop

So yeah, each client gets its own Fiber, which is far lighter than an OS thread (and Fibers can talk to each other using Channels). I think some of the flak Python's GIL gets is undeserved, but you probably wouldn't dare serve thousands of clients with Python threads, whereas coroutines wouldn't break a sweat.

That said, Crystal wouldn't be appropriate for you if you need multi-threading. The language designers have some tough hurdles to overcome (like redesigning the memory model and GC) if they ever want to introduce it. But for me, and for a lot of people going by its traction, it doesn't really matter. Clean syntax, an advanced type system with inference, and fast LLVM-generated native code is a great deal. So I am going to treat single-threadedness as a feature, even: no race conditions or data races by design! Fearless concurrency! Right now I am using Kemal as my web framework (although there are others). It says it's inspired by Sinatra; I don't know what that is, but as a or Flask user I find myself on familiar ground.

As an addendum, here is a very simple SSH tarpit, because ever since reading about the idea I have wondered how effective it is:

require "socket"
require "random"

r =

def handle_client(r, client)
  name = client.remote_address
  puts "\a"
  puts "#{name} caught at #{Time.utc}"
  loop do
    sleep 10
    # drip-feed a random junk line; an SSH client keeps waiting as long
    # as it hasn't seen a proper SSH banner
    client.puts(r.rand(100000))
  end
rescue
  puts "#{name} escaped at #{Time.utc}"
ensure
  client.close
end

server ="", 22)
loop do
  client = server.accept
  spawn handle_client(r, client)
end
I ran it both at home and on a reputable cloud provider for some hours. It seems bot activity is far higher in the cloud, perhaps because those IP blocks are well known and developers are the ones who actually use SSH. While the program works, almost all the bots seem to have timeouts configured against this sort of scheme; I did still catch a few though :)