00:00:10.280
hello everybody uh my name is hongon I
00:00:13.080
came all the way from Rhode Island uh
00:00:15.719
and just as I'm about to get adjusted to
00:00:17.920
the jet lag the conference is ending so
00:00:20.160
I'm really really sad about that but
00:00:23.680
today we've had a lot of talk about
00:00:26.599
Ractors even in this conference in
00:00:28.800
previous conferences and I'm sure you've
00:00:30.920
seen some blog posts about it and people
00:00:34.040
have talked a lot about their benefits
00:00:36.000
like what do they do on paper what are
00:00:38.280
their limitations and things like that
00:00:40.640
um but today I'm going to talk about
00:00:41.879
Ractors from a slightly different
00:00:43.320
perspective not to make it just yet
00:00:45.440
another Ractor talk I'm going to talk about
00:00:48.079
my experience with building real
00:00:50.960
projects with
00:00:53.640
Ractors so specifically I'm going to
00:00:56.359
share my experience with two projects
00:00:58.440
the first is a raw TCP HTTP server I
00:01:01.440
built with Ractors uh and
00:01:04.400
the second one is a slightly more
00:01:05.760
advanced Rack-compatible server that I
00:01:08.040
developed uh with Ractors and as you come
00:01:10.759
along with me for the ride I hope you're
00:01:12.520
going to learn some unfamiliar but
00:01:14.560
useful design patterns for Reliable
00:01:16.640
concurrency and lastly I hope that you
00:01:18.720
form some opinions about Ractors about
00:01:21.479
Ruby as a language and where we should
00:01:24.200
go as a
00:01:26.000
community but before I begin I'm going
00:01:28.159
to give you guys a thought experiment
00:01:31.159
imagine you are a preschool teacher and
00:01:34.200
you have a group of toddlers they're
00:01:36.439
very very well behaved toddlers so if
00:01:38.920
you tell them to do something they will
00:01:40.680
do what you told them to do but they are
00:01:44.000
toddlers so if you don't explicitly say
00:01:46.920
hey don't put your hand in the cookie
00:01:48.600
jar they're going to go ahead and put
00:01:50.040
their hand in the cookie jar if you
00:01:51.479
don't say don't burn yourself they're
00:01:52.960
going to burn
00:01:54.399
themselves now imagine you have to try
00:01:56.759
to have them share a single coloring
00:01:59.479
book
00:02:01.159
without them drawing over each other
00:02:02.840
because you know if Amy draws over
00:02:05.119
Brian's drawing Brian's going to start
00:02:06.759
crying and nobody really wants
00:02:08.920
that so what type of rules do you need
00:02:12.000
to set what are the set of rules you
00:02:13.920
need so that Amy doesn't draw over
00:02:16.120
Brian Brian doesn't draw over Charlie
00:02:17.920
and so on does anybody have an
00:02:25.239
idea how's this okay as a first to
00:02:29.680
kick things off How about if somebody is
00:02:32.680
already drawing you have to wait for
00:02:34.640
that person to finish drawing before you
00:02:37.480
start drawing does that sound
00:02:43.840
reasonable assuming that they are very
00:02:45.920
well behaved Toddlers and they will
00:02:47.319
listen um okay any other rules that
00:02:49.760
people can think of stand in a line
00:02:53.040
stand in a line sure
00:03:01.000
oh that we're going to get to that later
00:03:02.840
but at the moment we're not going to
00:03:04.040
touch the coloring book okay so we have
00:03:06.840
we have those two rules very basic if
00:03:09.239
somebody's drawing wait for them to
00:03:10.799
finish drawing and stand in line okay
00:03:14.640
who thinks this is going to actually
00:03:15.959
work do you think the
00:03:18.200
coloring book will be always occupied
00:03:19.840
and nobody's going to draw over each
00:03:20.959
other can I get a raise of hands who
00:03:22.879
think this is a good
00:03:24.879
system raise of hands who thinks this is
00:03:28.319
a bad system
00:03:31.159
who's not entirely
00:03:33.680
sure okay so we got I think like 50% not
00:03:38.200
sure 40% bad system 10% good system so
00:03:40.840
we have a bit of a
00:03:42.439
distribution let's consider this
00:03:44.319
situation Amy starts drawing first and
00:03:47.720
as we decided Brian will start waiting
00:03:50.640
until Amy finishes drawing but remember
00:03:53.360
how we didn't say that Amy needs to
00:03:55.640
actually finish drawing when she starts
00:03:58.480
drawing so Amy decides to take a nap in
00:04:01.319
the middle of drawing and because we
00:04:03.159
told Brian that you have to wait until
00:04:04.760
Amy finishes drawing the drawing
00:04:07.200
book is in front of Amy who is napping
00:04:09.840
and Brian just sits
00:04:11.680
idly and only when Amy wakes up like an
00:04:14.200
hour later Brian can actually start
00:04:18.239
drawing so this is not a very desirable
00:04:21.680
situation we don't want the kids to do
00:04:23.919
nothing for an hour now let's even
00:04:26.680
complicate it even more and add crayons
00:04:28.720
to the equation
00:04:30.639
now they have to share crayons and again
00:04:32.919
we set the rule that if you want a
00:04:34.479
crayon you should wait for the person
00:04:35.840
who has that crayon to finish
00:04:37.880
drawing and then you run into a
00:04:39.759
situation like this Amy starts drawing
00:04:41.840
with the blue crayon Brian starts
00:04:44.280
waiting on
00:04:45.320
Amy Charlie starts waiting on Brian with
00:04:48.400
a red
00:04:49.280
crayon but then Amy decides she actually
00:04:51.639
also wants the red crayon and Amy starts
00:04:53.520
waiting on
00:04:54.520
Charlie what goes wrong in this
00:04:58.080
situation it's a Deadlock Amy is waiting
00:05:00.639
on Charlie Charlie is waiting on Amy so
00:05:03.120
nobody can actually do
00:05:05.240
anything okay now switch toddlers with
00:05:09.039
threads and your dinosaur coloring book
00:05:12.039
with memory and this is a rough picture
00:05:15.680
of what programming with threads is like
00:05:18.759
threads are free to sleep whenever they
00:05:22.080
will wait for whoever they will share
00:05:25.720
whatever unless you explicitly tell them
00:05:28.400
you cannot do this no you cannot sleep
00:05:30.759
while you're drawing no you cannot draw
00:05:32.759
while other person is drawing you have
00:05:34.319
to set these very very explicit rules
00:05:36.240
for them to not do
00:05:38.440
something.
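Here's a minimal sketch of what those explicit rules look like with plain Ruby threads (this example is mine, purely illustrative, not from the slides):

```ruby
# Illustrative only: a Mutex is the "wait until the current kid finishes
# drawing" rule. Nothing stops a thread from sleeping while it holds the lock,
# and nothing stops two threads from grabbing two locks in opposite orders and
# deadlocking -- every rule is yours to remember.
coloring_book = []        # shared state
book_lock     = Mutex.new # rule: only one drawer at a time

drawers = 3.times.map do |i|
  Thread.new do
    book_lock.synchronize do
      coloring_book << "drawing #{i}"
      # sleep(3600) here would be Amy's nap: everyone else just waits
    end
  end
end

drawers.each(&:join)
puts coloring_book.inspect
```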
00:05:39.960
And what many programmers working
00:05:42.880
with concurrency have found out over the
00:05:44.880
years is that this is kind of not a
00:05:47.080
great way to do things generally it's
00:05:48.880
easy to forget a rule or two and
00:05:50.759
something goes terribly wrong in
00:05:53.039
production um and this idea that threads
00:05:56.080
are not necessarily a good tool is not
00:05:59.240
something new
00:06:01.199
either this is a slide from uh John
00:06:04.919
Ousterhout's 1995 presentation and you
00:06:07.720
can tell how old it is
00:06:09.319
because it says Sun Microsystems and not
00:06:12.120
Oracle uh and if you have worked with
00:06:15.199
distributed systems you might also be
00:06:16.800
familiar with John Ousterhout's more recent work
00:06:19.440
called The Raft protocol so the point I
00:06:22.160
want to get across is that John Ousterhout is
00:06:24.080
one of the pioneers of concurrency and
00:06:26.240
still is a very very major player in the
00:06:28.280
concurrency game and he is a veteran of
00:06:30.280
programming with threads and this is
00:06:32.720
what he had to say about programming
00:06:34.280
with threads essentially you have to be
00:06:36.800
a wizard to program with threads
00:06:39.919
correctly and if that wasn't convincing
00:06:42.919
enough here's a quote from Matz himself
00:06:45.639
from a 2019 interview I also
00:06:48.639
regret adding threads to the language we
00:06:50.080
should have had a better concurrency
00:06:52.000
abstraction so hopefully you kind of see
00:06:55.440
why threads are a slightly dangerous
00:06:58.759
feature for
00:07:00.960
concurrency now we're going to go back
00:07:03.080
to actually what one of the uh people in
00:07:05.039
the audience mentioned before and
00:07:07.879
how about we split the coloring book in
00:07:10.639
three and share it among Amy Brian and
00:07:12.840
Charlie same thing with the crayons if
00:07:14.919
you have 12 crayons we're going to give
00:07:16.680
four to Amy four to uh Brian four to
00:07:19.039
Charlie something like
00:07:20.840
that and if you need something that
00:07:24.440
another child has you will go and ask
00:07:26.720
for it and that person will give it to
00:07:28.840
you
00:07:30.319
notice how we are sharing less but
00:07:33.080
there's a bit more structure to the
00:07:34.759
organization
00:07:36.479
here and for this to work we just have
00:07:38.919
three very very simple rules first is
00:07:42.720
you don't touch other people's stuff
00:07:45.039
second is if you need something you ask
00:07:46.960
the person who has it and the last is if
00:07:49.759
somebody asks for something you need to
00:07:51.960
respond to it so the last one is
00:07:53.840
basically if you were napping before and
00:07:55.720
Brian asks you for a coloring pen you
00:07:57.840
have to wake up and ask and respond to
00:08:00.080
Brian's
00:08:01.639
order and this is the exact principles
00:08:04.800
on which actors and as an extension rors
00:08:07.199
are built upon you don't share memory so
00:08:09.720
you don't touch others uh coloring books
00:08:12.639
you send messages when you need
00:08:14.199
something and you receive messages and
00:08:16.479
respond to
00:08:17.680
them and because of these constraints
00:08:20.199
you have a framework for much easier
00:08:22.159
reasoning about concurrency.
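Here's a tiny sketch of those three rules expressed with a Ractor (Ruby 3.x, still experimental; the example is illustrative, not from the talk):

```ruby
# The crayon box lives entirely inside one Ractor; nobody else can touch it.
# Other code asks for a crayon by sending a message and waits for the reply.
crayon_box = Ractor.new do
  crayons = %w[red blue green]          # owned by this Ractor only
  loop do
    sender, wanted = Ractor.receive     # rule 3: respond when asked
    sender.send(crayons.delete(wanted)) # hand it over (or nil if it's gone)
  end
end

crayon_box.send([Ractor.current, "blue"]) # rule 2: ask, don't grab
puts Ractor.receive.inspect               # => "blue"
```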
00:08:25.879
Okay but who here is
00:08:27.919
building a crayon coloring book sharing
00:08:31.240
service who here is building a Fibonacci
00:08:34.000
function in
00:08:35.599
production I hope nobody um I hope you
00:08:38.880
didn't put a Fibonacci function in your
00:08:40.240
rails app so I'm going to we're going to
00:08:42.760
build something real we're going to
00:08:44.839
build something that's actually
00:08:46.600
potentially
00:08:49.560
usable uh and the first thing we're
00:08:52.160
going to build is based off of Mike
00:08:54.279
Perham the creator of Sidekiq's blog
00:08:56.440
where he built a health check service
00:08:58.360
for Sidekiq Enterprise
00:09:00.360
um the specific gem that he uses in this
00:09:03.040
case is a gem called GServer which is
00:09:04.959
a very very archaic ex-standard-library
00:09:08.560
gem that offers some very basic TCP and
00:09:11.279
HTTP
00:09:13.000
functionality um and I hope that this
00:09:16.200
Sidekiq Enterprise functionality is
00:09:18.839
real enough for you where you can
00:09:20.640
see yourself building something like
00:09:22.040
this maybe not as like a customer facing
00:09:24.240
tool but as an internal tool for
00:09:25.600
monitoring things like that.
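To make that concrete, here's roughly the shape of such a GServer-based health check (a sketch in the spirit of that blog post, not his actual code; the port and the check itself are made up):

```ruby
require "gserver" # long ago part of the standard library, now the gserver gem

# GServer spawns a thread per connection and hands the client socket to #serve.
class HealthCheckServer < GServer
  def initialize(port = 7433)
    super(port)
  end

  def serve(io)
    healthy = true # in real life: ping Redis, check queue latency, etc.
    if healthy
      io.print "HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nOK"
    else
      io.print "HTTP/1.1 404 Not Found\r\nContent-Length: 0\r\n\r\n"
    end
  end
end

server = HealthCheckServer.new
server.start
server.join
```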
00:09:27.440
So GServer follows a pretty simple
00:09:31.320
architecture uh it's threaded and for
00:09:33.800
each incoming connection you spawn a new
00:09:36.480
thread and that thread will run whatever
00:09:39.320
code you provided it um in the check in
00:09:41.680
the case of a health check if it
00:09:43.160
responds it returns a 200 if it doesn't
00:09:45.480
it returns a
00:09:47.360
404 and thanks to the fairly simple
00:09:49.880
architecture this
00:09:52.760
is uh this is pretty much immediately
00:09:55.959
convertible to
00:09:57.440
Ractors where instead of using
00:10:00.279
threads we will just spawn a bunch of
00:10:01.800
Ractors or in the specific implementation
00:10:04.160
that we have we're going to have a pool
00:10:05.519
of Ractors and have them pick up each
00:10:07.560
connection as it comes
00:10:09.320
in so this is the uh slightly truncated
00:10:13.160
version of the code for the main Ractor
00:10:15.519
you can see it's pretty straightforward
00:10:17.480
you just have uh you have a loop where
00:10:21.399
the main Ractor just accepts clients
00:10:24.160
from a uh from a TCP
00:10:26.880
port and it yields that client so
00:10:29.880
yielding is a form of passing
00:10:31.839
messages to another
00:10:33.440
Ractor uh and the rest of the code below is
00:10:36.560
just stopping the Ractors um you can see
00:10:39.040
that the main Ractor collects the uh work
00:10:41.839
of the Ractors after it
00:10:43.680
terminates and here's the code for the
00:10:45.600
worker this is even simpler where it
00:10:47.560
just says you get a message and if the
00:10:49.760
message is not a terminate message
00:10:52.200
you assume that it's a client
00:10:55.360
object and you just run whatever code
00:10:57.920
the uh person who asked you to run the
00:10:59.920
code and the rest is just error rescuing
00:11:03.320
like if you have an error you don't want
00:11:04.360
it to hang you want it to actually exit
00:11:05.959
and things like that.
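The shape of those two snippets is roughly the following (a reconstruction for illustration, not the actual repository code; in particular, whether a client socket can really cross a Ractor boundary with move: true depends on your Ruby version, so treat that part as an assumption):

```ruby
require "socket"

# A listener Ractor accepts sockets and yields them; a pool of workers takes
# them. Passing IO between Ractors is the shaky, version-dependent part.
listener = Ractor.new do
  server = TCPServer.new(8080)
  loop do
    client = server.accept
    Ractor.yield(client, move: true)  # yield = pass the connection to a worker
  end
end

workers = 4.times.map do |i|
  Ractor.new(listener, name: "worker-#{i}") do |source|
    loop do
      client = source.take            # blocks until a connection is yielded
      break if client == :terminate   # shutdown message, as in the worker above
      begin
        client.print "HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nOK"
      rescue => e
        warn "worker error: #{e.message}" # rescue so one bad client can't hang us
      ensure
        client.close
      end
    end
  end
end

workers.each(&:take) # the main Ractor collects the workers when they terminate
```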
00:11:07.360
So the reimplementation of GServer
00:11:11.040
with Ractors is a few hundred lines of code
00:11:13.800
um it's you can actually check it out on
00:11:16.120
GitHub through the link below um it
00:11:18.920
works it fully functions I even provide
00:11:20.600
a sample health check that is exactly
00:11:22.920
the same as the Sidekiq Enterprise
00:11:24.480
health check uh
00:11:27.200
service um but so we already have at
00:11:30.839
this point just a few hundred lines of
00:11:32.600
code we already have a pretty functional
00:11:35.920
Ractor library but this is too easy for
00:11:39.240
us we want to do something even more
00:11:40.959
advanced we want to really push its
00:11:42.519
limits and certain limits that come
00:11:44.880
to mind with GServer is that it works
00:11:46.839
with raw HTTP and TCP meaning that uh
00:11:50.639
you basically just get an IO stream and
00:11:52.800
a bunch of text a large string from that
00:11:55.600
which is not very useful like your rails
00:11:57.320
app cannot use a raw TCP connection
00:11:59.240
it uses something instead called a
00:12:01.480
rack environment it's also vulnerable to
00:12:04.279
a Slowloris attack which is when you have a
00:12:06.839
client that sends really really slow
00:12:09.279
data uh and that saturates however
00:12:12.200
many workers you have um and that means
00:12:15.399
all because all your workers are waiting
00:12:17.360
for this slow client to pass data it
00:12:19.880
can't take on any new
00:12:21.320
clients technically we could just tell
00:12:24.320
people to use nginx like Unicorn and
00:12:26.600
Pitchfork do um but because
00:12:29.279
again we really want to push the
00:12:31.160
limits of Ractors we're not going to do
00:12:32.519
that we're going to handle that
00:12:36.360
ourselves and for this I looked to Puma
00:12:39.480
which actually does have some level of
00:12:41.720
handling of slow clients so this is a
00:12:44.160
very well not a very simplified but a
00:12:46.160
somewhat simplified diagram of how Puma
00:12:48.120
Works where Puma has a dedicated
00:12:50.720
receiver thread that takes
00:12:52.560
connections and when each connection
00:12:55.440
finishes writing a request Puma will
00:12:57.880
send that request over to a worker um or
00:13:01.000
specifically puts it on a work queue where
00:13:03.160
then a worker thread comes and picks it
00:13:05.639
up um and Puma also handles transforming
00:13:09.160
the HTTP request into a rack environment
00:13:11.760
which can then be consumed by rails
00:13:13.800
Sinatra Hanami whatever um modern web
00:13:16.639
app uh framework you're
00:13:19.279
using so Puma has obviously a lot more
00:13:22.440
things going on than G server but when
00:13:24.880
when you boil it down to the stuff that
00:13:26.360
matters it's really three things that it
00:13:28.600
does first it handles connections
00:13:31.279
asynchronously that means if you have a
00:13:33.519
connection that's not sending any data
00:13:35.440
you're not going to be waiting for that
00:13:36.760
to send any data instead you're going to
00:13:38.600
be working on other stuff and then when
00:13:40.000
it does send data then you go back to it
00:13:42.120
and actually work uh handle the data
00:13:44.560
that's uh handle the new data secondly
00:13:48.120
it transforms uh HTTP requests into a
00:13:51.399
rack environment which is some rather uh
00:13:54.920
convoluted parsing because the HTTP
00:13:57.440
specification has just so many edge
00:13:59.680
cases and lastly it has some mechanism
00:14:02.639
for scheduling workers that is once a
00:14:05.399
request is processed and a response is
00:14:07.720
sent back uh response is ready it will
00:14:10.000
take that response and send it back out
00:14:11.519
through the connection.
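To make the second task concrete, here's a simplified picture of that hand-off (only a subset of the real Rack keys, and a stand-in app of my own):

```ruby
require "stringio"

# The parsed HTTP request becomes a plain hash, and the framework is just an
# object that responds to #call(env) -- that's the whole Rack contract.
env = {
  "REQUEST_METHOD" => "GET",
  "PATH_INFO"      => "/health",
  "QUERY_STRING"   => "",
  "SERVER_NAME"    => "localhost",
  "SERVER_PORT"    => "8080",
  "rack.input"     => StringIO.new(""),
  "rack.errors"    => $stderr,
}

app = ->(env) { [200, { "content-type" => "text/plain" }, ["OK"]] } # any Rack app

status, headers, body = app.call(env)
puts status, headers.inspect, body.join
```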
00:14:12.639
For the first two tasks um we
00:14:16.120
are going to use a gem actually called
00:14:19.519
async-http this is the gem that is
00:14:22.199
actually the backbone of the Falcon web
00:14:24.639
server so the Falcon web server is at a
00:14:27.240
very high level just a wrapper around
00:14:28.680
async-http also developed by Socketry um
00:14:32.759
and this is where we kind of meet our
00:14:35.040
very first really big
00:14:37.880
challenge uh yep so async-http will
00:14:41.800
handle the asynchronous IO operations
00:14:43.959
and it'll also convert our string to a
00:14:46.120
rack environment which is a lot of
00:14:47.519
really really complicated
00:14:50.160
parsing um but as soon as you heard if
00:14:52.839
you've worked with Ractors as soon as you've
00:14:54.399
heard we're going to use this gem a
00:14:56.279
thought might have popped up gems
00:14:58.160
usually aren't usable with Ractors as in
00:15:01.600
a lot of gems are not compatible with
00:15:03.959
Ractors um who here has worked with uh
00:15:07.160
threaded
00:15:08.759
code who here has run into thread unsafe
00:15:12.360
gems while working with threaded
00:15:15.360
code yeah so once you introduce
00:15:19.120
threads into your library then that
00:15:20.880
means you kind of need to consider a lot
00:15:22.800
of gems that weren't built with threads
00:15:25.040
in mind May display unexpected behavior
00:15:28.800
and for Ractors unfortunately this is
00:15:31.839
magnified uh tenfold I'd have to say where
00:15:35.560
nobody really considers the existence of
00:15:37.880
Ractors when building their gems so it's
00:15:39.839
up to you or in this case me to make the
00:15:43.240
gem compatible with
00:15:45.360
Ractors and the specific part where
00:15:47.639
there's a bit of friction is async-http
00:15:50.360
is thread safe it is fiber safe but it
00:15:52.279
is not Ractor safe and specifically the
00:15:54.759
Ractor-unsafe part comes in the shape
00:15:56.720
of a custom console that async-http uses
00:16:01.160
um and this console is a Singleton and
00:16:03.839
furthermore it is a Singleton that has a
00:16:06.040
hold of IO and IO fundamentally cannot
00:16:10.279
be passed as messages to Ractors even
00:16:12.800
when
00:16:13.959
frozen so I had to think of a way to
00:16:16.440
kind of work around
00:16:18.199
it and my idea was using some uh like a
00:16:21.759
distributed computing concept called
00:16:23.759
remote procedure calls so the idea is
00:16:27.120
you want to make what would have
00:16:30.440
been a local function call instead to
00:16:32.759
some sort of remote computer or in this
00:16:35.160
case a different processor and that
00:16:37.600
processor that has the actual uh
00:16:39.800
resources to calculate that will
00:16:41.680
calculate it or do the operation and
00:16:43.600
then send back the results to
00:16:45.720
you um so the basic idea is we want the
00:16:49.560
above code to become the code below
00:16:52.560
instead of calling the logger directly you
00:16:52.560
want to send a message to a logger Ractor
00:16:54.560
and the logger Ractor which has sole
00:16:57.480
ownership of the logger is then going to
00:16:59.720
print it out.
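Here's a minimal sketch of that idea (illustrative names, not the project's code): one Ractor owns the IO, and everyone else logs by sending it a message.

```ruby
require "logger"

# Exactly one Ractor holds the logger (and therefore the IO); it never leaves.
LOGGER_RACTOR = Ractor.new do
  logger = Logger.new($stdout)
  loop do
    level, message = Ractor.receive     # an RPC-style request from some worker
    logger.public_send(level, message)
  end
end

# What would have been `logger.info("request handled")` elsewhere becomes:
LOGGER_RACTOR.send([:info, "request handled"])
sleep 0.1 # give the logger Ractor a moment to print before the script exits
```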
00:17:03.880
For this uh because I didn't
00:17:06.799
actually want to change all the code I
00:17:08.280
used refinements where using refinements
00:17:11.919
that I only included in the
00:17:13.919
worker Ractors which are the Ractors that
00:17:15.679
actually need access to the logger I was
00:17:18.160
able to stub out every single local call
00:17:21.520
like every single non-Ractor call to the
00:17:24.439
logger or in this case the console and
00:17:28.720
uh instead stub it out with a message
00:17:31.360
send to the logger Ractor um the part
00:17:34.200
that carries that uh that part is the Ractor
00:17:37.760
local storage so when I initialize a Ractor
00:17:40.480
I give each Ractor the address of the
00:17:42.240
logger Ractor and at runtime it'll access
00:17:45.600
that and check okay which Ractor do
00:17:48.440
I actually send my RPC request to.
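A sketch of what that looks like, assuming the Ractor-local storage API Ractor.current[:key] (MyConsole is a made-up stand-in for the gem's console, and :logger_ractor is a made-up key; the real code refines the actual console class):

```ruby
# The unsafe original: a console-ish object that holds IO and logs directly.
class MyConsole
  def info(message)
    $stdout.puts(message)
  end
end

# The refinement, activated with `using` only in the worker Ractors' source
# files: the same call site now sends an RPC-style message to whichever
# Ractor owns the logger, looked up from Ractor-local storage at runtime.
module RactorSafeConsole
  refine MyConsole do
    def info(message)
      Ractor.current[:logger_ractor].send([:info, message])
    end
  end
end

# In a worker's file, at the top level:
#   using RactorSafeConsole
# and inside the worker Ractor's block, at startup:
#   Ractor.current[:logger_ractor] = logger_ractor_passed_in_as_an_argument
```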
00:17:51.120
So in the end we kind of have an
00:17:54.039
architecture like this where you have
00:17:56.360
connections you have a receiver that
00:17:58.880
buffers those connections changes them
00:18:01.039
into a rack environment passes them to
00:18:02.720
worker Ractors and then you kind of have a
00:18:07.360
logger Ractor at the side for handling um
00:18:07.360
logging IO
00:18:08.840
operations this is not too far off from
00:18:11.679
puma and lastly we have to handle
00:18:14.520
scheduling
00:18:15.919
so for scheduling we have multiple
00:18:18.720
connections and we have fibers that are
00:18:20.960
present in the receiver
00:18:23.760
Ractor when a connection sends some data
00:18:27.159
a fiber is created and that fiber
00:18:30.240
will buffer the request it will read
00:18:31.880
whatever is available on the
00:18:34.120
connection create a partial request
00:18:37.280
object and say that it actually sends it
00:18:40.000
in chunks and it doesn't send the entire
00:18:41.760
request in one go then it'll wait for
00:18:44.360
another connection and say that in this
00:18:46.120
case we have a different connection that
00:18:47.720
sends some additional data and then it
00:18:50.280
goes to
00:18:51.200
sleep once the first connection is ready
00:18:53.799
again the first fiber is woken up once
00:18:55.919
more and then it actually completes the
00:18:57.799
request object
00:18:59.240
we pass that request object or rack
00:19:01.240
environment to the worker
00:19:03.039
pool and then same thing with the second
00:19:05.320
connection where once all the data is
00:19:07.200
received we package that into a rack
00:19:08.799
environment and pass it on to the worker
00:19:10.440
pool and notice how because everything
00:19:12.760
is running in parallel in the meantime
00:19:14.320
we've had the worker Ractor
00:19:16.600
actually process that request into
00:19:20.240
response and now the rather difficult
00:19:24.159
part comes in where we need to send that
00:19:25.960
response back to the first fiber notice
00:19:28.360
that we need to do this because the
00:19:31.480
worker Ractors actually don't have the
00:19:33.640
connection objects themselves only the
00:19:35.640
receiver Ractor does so the worker
00:19:37.240
Ractor needs to send back the response
00:19:39.520
object to the worker uh to the receiver
00:19:41.840
Ractor and specifically to the fiber
00:19:43.640
from which it got the request
00:19:48.480
from so okay how do we wake up a fiber
00:19:51.159
then there are at large three classes of
00:19:55.200
operations that actually wake up a fiber
00:19:56.919
the first being some IO event we've seen
00:19:59.039
that where once a connection actually
00:20:01.400
has new data a fiber will wake up uh
00:20:03.600
when a connection doesn't have any new
00:20:04.840
data the fiber will go to sleep the
00:20:06.840
second is some sort of thread operation
00:20:09.280
so specifically a thread joining causes
00:20:12.640
uh causes a fiber wakeup
00:20:15.120
event similarly we have the mutex unlock
00:20:17.720
which is uh also wakes up the
00:20:20.840
fiber uh but notice how
00:20:23.559
there are no Ractor operations here so
00:20:25.840
currently there's no way for Ractors to
00:20:27.520
actually wake up a fiber
00:20:29.360
so instead we're going to cheese a
00:20:30.919
little bit and create a
00:20:34.640
blocking Ractor uh do a blocking
00:20:37.080
Ractor operation inside a thread so the
00:20:39.159
idea is that once the Ractor
00:20:42.520
take actually finishes the thread will
00:20:44.840
then join and because of thread joins
00:20:47.200
that will trigger a fiber event.
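In code, the cheese looks roughly like this (a sketch of my own, and, as the next part explains, exactly the piece that doesn't currently behave as hoped):

```ruby
# Park the blocking Ractor#take inside a Thread, so that from the fiber
# scheduler's point of view the wait is a thread join -- one of the events
# that can wake a sleeping fiber.
def await_response(worker_ractor)
  thread = Thread.new { worker_ractor.take } # blocking Ractor operation, off the fiber
  thread.join                                # thread join => fiber wakeup event, in theory
  thread.value                               # the response object the worker yielded
end
```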
00:20:50.200
And here comes the second part of
00:20:53.880
the
00:20:54.559
talk making Ractors usable
00:20:59.159
so the previous idea I had about the
00:21:03.559
GServer replacement works
00:21:05.240
perfectly but the Rack-compatible server
00:21:09.120
actually kind of doesn't work but I'm
00:21:12.279
going to say it's not super my
00:21:15.039
fault because there are some unstable
00:21:17.559
APIs for Ractors and
00:21:20.760
there are also some dangerous bugs that
00:21:22.799
I'm quite uncomfortable with the first
00:21:25.919
critical real blocker is that the
00:21:28.559
interaction between threads and Ractors is
00:21:30.360
not very well defined um specifically
00:21:32.520
there are some definitions for what
00:21:34.480
happens to threads inside Ractors but
00:21:37.039
it's not entirely clear what happens to
00:21:38.960
Ractors inside of threads so this will
00:21:41.679
actually not trigger a fiber wakeup
00:21:44.320
event despite being a thread join
00:21:46.559
event meaning that once we have the
00:21:49.360
response object available we actually
00:21:51.000
can't send that back to the fiber
00:21:52.640
because the fiber will never know that
00:21:54.360
that response object was sent because it
00:21:56.400
doesn't detect it
00:21:59.320
um there is a pull request on it uh not
00:22:01.360
a pull request but there is an issue
00:22:02.679
report on um the Ruby bug
00:22:05.159
tracker but this does require a
00:22:07.480
bit more discussion about the API and
00:22:08.960
things like
00:22:10.520
that but here's a more concrete and
00:22:14.000
arguably a current issue so this is my
00:22:17.679
logging for uh when a Ractor is initialized
00:22:21.320
you can see that uh it's pretty
00:22:22.919
straightforward it just sends the uh
00:22:25.679
worker Ractor's name and says oh it was
00:22:28.080
initialized
00:22:29.440
but you might also notice that I have a
00:22:30.840
freeze there that's because Ractors need
00:22:34.799
to have all objects frozen if they're being
00:22:36.880
sent uh especially if it's being moved
00:22:39.760
but it's also supposed to automatically
00:22:42.240
freeze things for you if they are not
00:22:44.039
frozen and if it's unfreezable then it
00:22:46.080
will throw an error.
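The kind of send being described is roughly this (a reconstruction, not the slide's exact code):

```ruby
# The worker's name is interpolated into a status line and sent to a logger
# Ractor, with an explicit .freeze kept in as the defensive workaround.
logger = Ractor.new do
  loop { puts Ractor.receive }
end

ractor_name = "worker-1"
logger.send("#{ractor_name} initialized".freeze)
sleep 0.1 # let the logger Ractor print before the script exits
```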
00:22:47.039
But if you actually get rid of the
00:22:49.080
freeze statement there um instead of
00:22:51.400
having the nice initial uh worker
00:22:53.760
Ractor initialized message you're going
00:22:55.640
to get something like this um so you see
00:22:58.840
the boxes with the question marks there
00:23:00.799
and you also see the
00:23:02.440
initialized never actually printed so
00:23:04.919
here what I suspect is actually
00:23:06.080
happening is some sort of memory bug
00:23:08.520
where whatever is not uh because there's
00:23:11.600
some string uh interpolation going on
00:23:14.320
whatever string is actually not being
00:23:16.480
properly Frozen and you have some sort
00:23:18.480
of memory leak going on so we have a
00:23:21.840
weird situation where a message based
00:23:24.080
concurrency framework
00:23:27.559
cannot necessarily correctly send
00:23:29.640
messages and this I find to be very
00:23:32.120
detrimental um to actually using
00:23:36.480
Ractors so here I briefly want to say that
00:23:41.840
Ractors uh I think there's a lot of
00:23:43.679
things planned for Ractors a lot of really
00:23:45.679
interesting features but unfortunately
00:23:49.120
frameworks with incomplete features
00:23:51.760
are usable in a limited context but
00:23:53.960
Frameworks with incorrect features are
00:23:56.080
really difficult to use because you
00:23:57.440
never know when things are going
00:23:58.600
to go wrong like I would never be able
00:24:00.559
to put Ractors in production because who
00:24:03.159
knows when I might get some memory
00:24:04.640
corruption and God forbid uh some
00:24:07.840
customer data gets leaked that's you
00:24:09.600
know not something you want that's no
00:24:11.840
matter how fast the performance is
00:24:13.559
how much the performance improved that's
00:24:15.080
just something I cannot trade
00:24:17.480
off but it's easy to you know say oh you
00:24:21.000
know Matz and Koichi should work a 100
00:24:23.559
hours a week to fix all this and you
00:24:25.200
know everything will be
00:24:26.559
nice that's really easy for us to say um
00:24:30.600
but I think the more difficult question
00:24:31.840
is so how can we help out what can we do
00:24:35.840
to help advance Ractors how to help
00:24:38.120
advance Ruby how to help advance these
00:24:40.440
new experimental and arguably exciting
00:24:42.640
features in
00:24:44.279
Ruby um but but before anything why do
00:24:47.799
we need to help at
00:24:49.720
all and the Ugly Truth is each year I
00:24:53.720
find it harder to explain to my friends
00:24:55.880
and colleagues why they should try out
00:24:57.640
Ruby
00:24:59.320
uh you're building web apps just use
00:25:01.200
JavaScript or if you insist on using a
00:25:03.200
monolith use
00:25:04.840
C# or whatever alternative there is
00:25:08.720
looking for a simple scripting language
00:25:10.640
go with python where you just have a
00:25:12.159
much much larger
00:25:14.559
audience the truth is that the Ruby
00:25:16.559
Community must face that Ruby is not as
00:25:19.120
exciting as it was once uh as it was in
00:25:22.399
the
00:25:23.880
past and so I looked at some of the
00:25:26.960
fastest growing languages
00:25:28.679
in the modern age and
00:25:31.039
something that came to mind is go which
00:25:33.320
as you can see overtook Ruby as the 10th
00:25:35.440
place in
00:25:37.279
2022 Elixir which we had a lovely
00:25:40.520
talk from José about and Rust and I
00:25:44.360
thought about what these languages have
00:25:46.360
in common uh I think for
00:25:49.279
Elixir it's very obvious where any code
00:25:51.240
you write is automatically concurrent
00:25:52.960
automatically parallelizable with go
00:25:55.399
even though it's not automatically
00:25:56.840
parallelizable they just provide you so
00:25:58.480
many Primitives to make it concurrent to
00:26:00.279
make it scalable to make it
00:26:02.039
parallelizable and lastly for rust um
00:26:04.919
this might be rust obviously has a lot
00:26:07.120
of new and kooky ideas but I think
00:26:10.679
concurrency wise it also provides a new
00:26:13.279
way to do concurrency at a very very low
00:26:15.200
level doing concurrency in C is just
00:26:19.039
it's not a very pleasant experience to
00:26:20.679
say the least in fact it's
00:26:22.120
almost impossible to build any large-scale
00:26:24.919
program
00:26:26.960
concurrently so what I want to get
00:26:29.480
across here as the point is that people
00:26:31.480
care about concurrency now that's a
00:26:33.480
major factor in people making decisions
00:26:35.159
about which language they're going to
00:26:36.279
learn next which language they're going
00:26:37.480
to deploy
00:26:38.640
next and while scripting languages might
00:26:42.399
not be exciting I think that a scripting
00:26:44.520
language with an easy concurrency and
00:26:46.559
parallelism way uh framework that's
00:26:49.520
pretty exciting because python doesn't
00:26:51.200
have
00:26:52.159
that bash doesn't have that no real
00:26:55.000
scripting language that's used commonly
00:26:57.159
has such an easy way to do
00:26:58.880
concurrency and parallelism especially I would
00:27:01.880
argue basically no
00:27:04.120
scripting language has an easy way to do
00:27:06.559
parallelism so I believe that maybe this
00:27:08.960
is the Breakthrough that Ruby is looking
00:27:10.600
for to come back into the main stage
00:27:12.679
once more and say hey we're actually
00:27:14.520
doing something exciting we're actually
00:27:15.840
doing something that no other scripting
00:27:17.360
language can offer and that's why you
00:27:19.279
need to use Ruby if you want to build
00:27:21.120
apps fast and if you want to build
00:27:22.559
concurrent and parallel apps maybe
00:27:25.039
that's our sell
00:27:29.279
so then how should you use Ractors and I
00:27:34.200
say you should use Ractors pretty
00:27:36.360
bravely like I've shown you how
00:27:39.360
we can build a fairly reasonably sized
00:27:41.960
project with Ractors sure there's some
00:27:44.240
challenges sure it's uncomfortable sure
00:27:46.200
you're going to run into some bugs but
00:27:48.320
it's buildable the code is
00:27:50.720
literally there and the fact that people
00:27:54.320
aren't trying it out or just trying
00:27:56.559
maybe 10 or 15 lines of Ractors and saying oh
00:27:58.840
like I can't just swap out threads for
00:28:00.480
Ractors and have it work I feel like
00:28:02.240
that's kind of blocking people from
00:28:04.640
really exploring the full potential of
00:28:06.480
Ractors and where this can go so here's my
00:28:10.039
ask to you all the Ruby Community I ask
00:28:12.720
that you try out Ractors yourself not on
00:28:15.080
some toy program but on something that
00:28:17.039
you might actually be interested in
00:28:18.559
trying out maybe it's a new framework
00:28:20.279
maybe you want to learn what the rack
00:28:21.640
specification does maybe you want to
00:28:22.919
learn parsing try it out on something
00:28:25.159
that you want to learn as a hobby
00:28:27.039
project and tell us share with the
00:28:29.240
community what works what doesn't work
00:28:32.600
what should work but doesn't work and
00:28:35.200
most exciting what
00:28:36.720
shouldn't work but actually does work
00:28:39.279
and I think this is the way to find out
00:28:41.039
what are Ractors actually good for what are
00:28:43.240
they capable of and where should we
00:28:44.720
drive this new feature that's
00:28:47.039
potentially something that no language
00:28:49.360
has ever really done before in the right
00:28:51.960
direction um yeah so I hope that uh my
00:28:56.240
little hobby project is some inspiration
00:28:58.559
for you all to try out Ractors on your
00:29:01.039
own time and check out what's in
00:29:04.399
line for uh for the next exciting Ruby
00:29:06.799
feature thank you