00:00:10.200
so today the topic is Async Ruby this is a topic that is very very dear to my
00:00:15.559
heart and I'm very excited to tell you everything I know about Async Ruby a couple of slides about me my name is Bruno Sutić
00:00:22.400
and I consider myself to be an Async Ruby early adopter you can find my contact info on my web page
00:00:28.320
bruno.com and that is my GitHub handle as far as work goes I created my own web
00:00:34.600
app called link.com which is a modern broken link checker and a lot of the work that
00:00:40.520
I did with Async Ruby I did on that project right now my work-in-progress project is Rails Billing
00:00:48.039
it's about fast Stripe subscription integration with Ruby on Rails and do feel free to ask me about it
00:00:54.680
after the talk as far as my Async Ruby involvement goes I consider myself to
00:01:00.519
have worked pretty much two years full-time on Async Ruby as part of the link
00:01:06.320
project and what I did was implement a sophisticated web crawler that is
00:01:11.640
based on async technology I worked on a couple of async-only gems and async-limiter
00:01:18.280
is the only one that I made public and I did make
00:01:23.960
one Ruby core commit which is about making DNS requests
00:01:33.280
asynchronous all right let's dive into the topic what is Async Ruby the
00:01:39.159
definition I have to go through is it's fiber-based concurrency for Ruby
00:01:44.600
if we were just chatting I would say look for 90% of the cases it means you
00:01:49.640
want to run many many HTTP requests at once okay a couple of other lines that are
00:01:55.439
usually associated with the async programming paradigm because async programming does exist in other languages
00:02:02.200
right so async is a modern and lightweight I/O concurrency model it usually
00:02:08.280
contains a brain of the operation which is called the event loop or reactor and this is totally
00:02:15.080
unofficial but this technology gained mainstream popularity around 2010
00:02:20.680
and later and the popularity was driven by Node.js and the rise of
00:02:26.040
modern JavaScript okay so everything that I'm going to show you today and pretty
00:02:31.440
much the whole ecosystem has been driven maintained and developed by the Ruby core
00:02:36.640
team member Samuel Williams right he's not a big star but he's the guy that really
00:02:42.440
implemented everything that I'm going to talk about today if you see him buy him a beer okay so where in the Ruby
00:02:48.920
landscape of features does Async Ruby fit in right it's a concurrency model so there are a couple of other
00:02:55.400
concurrency models that Ruby has first off is multiple processes if you want to run multiple things at the same time you
00:03:01.959
spin up multiple Ruby processes there you go next up the newer technology is
00:03:07.080
Ractors and it's backed by operating system threads okay so you have one Ruby process and it runs on
00:03:13.840
multiple CPUs there you go next up we have threads it's an old technology
00:03:20.440
also the primitive for it is operating system threads and as we all know the limitation of Ruby threads is the global
00:03:27.360
interpreter lock okay so you can have only one thread running at the same time
00:03:33.879
this same limitation the global interpreter lock also applies to Async Ruby okay and the differentiation
00:03:41.439
is that the core primitive of Async Ruby is fibers I want to do a hands-up so who
00:03:49.360
has any experience with running multiple
00:03:54.400
processes okay a lot of people okay how about Ractors
00:04:00.319
no one okay how about threads so you have written threaded code okay about half
00:04:06.560
did anyone play with fibers or Async Ruby okay there are a few people okay all right
00:04:14.840
so what Async Ruby is not okay this is just to disambiguate a
00:04:20.160
couple of things so Active Record has this cool feature it really is a good feature which is called load_async
00:04:27.199
and there is a plethora of methods that have async in the name that feature
00:04:33.680
uses threads in the background so it's not fiber-based it
00:04:39.320
technically in the English-language sense does run asynchronously but in the programming context this is not
00:04:46.240
Async Ruby same with Sidekiq it has the perform_async method and we all know that Sidekiq runs multiple
00:04:53.360
threads it's a threaded program or library so that's not
00:04:58.400
really Async Ruby and it's not the topic of the talk today this slide shows the main
00:05:06.039
components of what is considered to be Async Ruby the red parts are features
00:05:11.560
and technologies that are kept maintained and implemented in CRuby and the white stuff is outside CRuby
00:05:19.080
that is just gems let's start with async the gem you just do gem install async and there you
00:05:25.919
have it it's not just any gem right Matz invited this gem into the
00:05:32.919
standard library and I think the invitation is still pending and I do have to ask Matz what's going on with
00:05:38.160
that okay so the gem was implemented by Samuel Williams the guy that pretty much worked on all these features
00:05:44.720
that you see on your screen right now and when we talk about working with async this is the user-
00:05:52.000
facing library this is what you and I interact with the most next up is the io-event gem you
00:06:00.400
can do gem install io-event but you usually do not do that because this gem is really really low-level it is
00:06:07.080
implemented as a C extension and it implements technologies with names such as kqueue epoll and io_uring
00:06:16.199
okay this is where the main logic of the fiber scheduler is
00:06:22.720
implemented it is low-level next up we're going into the C
00:06:29.199
Ruby source code and we have the fiber scheduler interface the fiber scheduler was added in Ruby 3.0 and that is one
00:06:35.840
of those features that's big you read it in the release notes it's like okay this really is something and then
00:06:42.080
you want to try it out and you don't know what to do with it right and that is fine because actually this
00:06:48.199
feature also is low-level it was added to CRuby exclusively to support
00:06:54.800
the io-event gem and the async gem okay so you can see this
00:07:00.520
whole ecosystem this paradigm really does have support from the Ruby source code right this is not just one guy's gem
00:07:07.840
it really has support from Ruby core okay just to dive into this topic a little bit more the fiber scheduler
00:07:14.440
interface contains hooks for blocking I/O operations for example you make an HTTP request you are first writing to a
00:07:21.840
socket and then you're reading from the socket when a response is received and at that point the fiber
00:07:27.599
scheduler interface gives the io-event gem the brains the opportunity to hook in there and while
00:07:34.319
it's waiting for an HTTP response to be received to schedule some other fiber to do some
00:07:40.960
useful work instead of just waiting the fiber scheduler interface is
00:07:47.919
an awesome feature of the Ruby programming language and it makes Async Ruby colorless we are going to touch
00:07:53.080
on that in the examples I think it's going to be clear what's going on as with all these features there's a
00:07:59.199
lot more depth to it and I did write a blog post so if you want to dive in you can read more about it
00:08:06.800
fibers have been in Ruby since before Async Ruby it's an old feature
00:08:13.039
and it is the primitive for Async Ruby the simplest explanation is
00:08:18.159
lightweight threads the technology has been optimized in assembly so not
00:08:24.360
C code but assembly which is even more performant in order to create a fiber
00:08:30.560
to switch between fibers and to destroy a fiber you do not need to
00:08:37.000
make system calls this makes fibers more lightweight and faster than for
00:08:43.880
example working with threads
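To make the primitive concrete, here is a minimal sketch using nothing but core Ruby, no gems involved:

    # A fiber is created, resumed and suspended entirely in user space -- no system calls.
    fiber = Fiber.new do
      puts "fiber: step 1"
      Fiber.yield            # suspend, hand control back to the caller
      puts "fiber: step 2"
    end

    fiber.resume             # prints "fiber: step 1", pauses at Fiber.yield
    puts "caller: doing something in between"
    fiber.resume             # prints "fiber: step 2", the fiber finishes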
00:08:50.160
fibers do have a lot more depth to them that I'm not going to go into today but if you're interested if you want to understand what fibers are Google stuff like stackful
00:08:56.880
coroutines and don't Google Ruby because no one is writing about it in Ruby do Google C++ because
00:09:04.160
most of the innovation with fibers and these low-level
00:09:09.360
technologies is happening in the C++ world and Ruby is just adopting it aside from the core libraries
00:09:16.640
and interfaces we have a number of async-specific gems the notable mention is
00:09:22.680
async-http which in my mind is the most performant Ruby HTTP client
00:09:29.800
that no one knows about it's great also there's Falcon which is an async web
00:09:35.279
server let's move on to examples starting with the basic ones
00:09:46.600
before jumping into Async Ruby I want to set the ground and do
00:09:53.240
a very simple example all the examples are going to be simple today so we are doing three operations
00:09:58.440
two HTTP calls and do note that these HTTP calls are calling
00:10:05.800
httpbin and these calls take two seconds to run plus some network latency okay we're using URI.open which
00:10:11.160
is built into Ruby HTTParty which is a third-party gem and then lastly plain old
00:10:16.360
sleep the operations take 7 seconds okay this is synchronous it goes step by step so we have to wait a little bit longer
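A minimal sketch of what that synchronous version might look like; the exact httpbin URLs and the sleep duration are assumptions chosen here to add up to roughly the 7 seconds mentioned:

    require "open-uri"
    require "httparty"   # third-party gem

    # Each operation blocks until it finishes, so the total time is the sum of all three.
    URI.open("https://httpbin.org/delay/2").read   # ~2 seconds + network latency
    HTTParty.get("https://httpbin.org/delay/2")    # ~2 seconds + network latency
    sleep 3                                        # plain old sleep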
00:10:16.360
let's rewrite this example with
00:10:22.240
Async Ruby I do want to explain two concepts first the top-
00:10:28.360
level Async block the capitalized kernel method okay that Async block
00:10:35.000
opens up and sets up the world for async and
00:10:41.519
it's inside this Async block that the fiber scheduler is set right
00:10:46.959
so for the duration of the Async block any fibers that are created inside of it will be scheduled by
00:10:53.959
the io-event gem okay so that's where it kicks in another concept that we have is the
00:10:59.600
async task okay so multiple tasks run at the same time and I do want to bring
00:11:07.279
up that tasks are just wrappers around fibers okay so you're probably going to hear me use task and fiber interchangeably
00:11:14.000
because they're really closely related
00:11:19.120
concepts as a result of this refactoring with async the total
00:11:24.160
execution time is now just a little bit greater than 2 seconds which
00:11:29.519
indicates that all three operations ran at the same time
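A sketch of how that rewrite could look; the same three operations as before, each wrapped in its own task inside the top-level Async block:

    require "async"
    require "open-uri"
    require "httparty"

    Async do |task|                  # top-level Async block: sets up the event loop / fiber scheduler
      task.async { URI.open("https://httpbin.org/delay/2").read }   # task 1
      task.async { HTTParty.get("https://httpbin.org/delay/2") }    # task 2
      task.async { sleep 3 }                                        # task 3
    end                              # waits for all child tasks; total is roughly the slowest one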
00:11:35.120
do you remember the statement I made earlier that async is colorless so what that means is that
00:11:42.680
for the synchronous Ruby example and this one that is async we are using
00:11:48.440
the same methods you can compare that to for example JavaScript we have all
00:11:54.399
written a little bit of JavaScript where in order to use these asynchronous methods
00:12:00.040
you have to for example annotate them with async and then later await them which gives JavaScript a
00:12:06.639
colored async paradigm in Ruby it's all the same you write the same code you use the same methods the same interface and that is great you want to have that
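To make the colorless point concrete, here is a tiny sketch: one ordinary method, no annotations anywhere, used both synchronously and inside Async:

    require "async"
    require "open-uri"

    # One plain method -- no async/await keywords anywhere.
    def fetch_delay
      URI.open("https://httpbin.org/delay/2").read
    end

    fetch_delay                                 # works synchronously...

    Async do |task|
      2.times { task.async { fetch_delay } }    # ...and concurrently, completely unchanged
    end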
00:12:12.720
this was also a simple example
00:12:18.160
but what happens when you make a single request and then use the response to create another HTTP
00:12:25.480
request how would you do that would you use a promise like in JavaScript no in Ruby you just write synchronous code so this
00:12:32.279
example is showing that okay we're back to square one it's just line one then you use the result if needed to make
00:12:39.240
another HTTP request yes you can write code
00:12:45.760
like promises in JavaScript for example you can use task.async and then wait on the result but
00:12:52.680
practically there's no need to do it you just write your code the way you would just simple Ruby code okay the
00:12:59.480
rule of thumb is you keep related operations in a single task so they run
00:13:05.600
sequentially and if you have unrelated operations then you split them into separate tasks so they can run at the same
00:13:12.360
time the execution time of this example is pretty much not relevant but it's back to 7 seconds because it's synchronous
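A sketch of that pattern; the httpbin endpoints are placeholders, the point is that dependent calls sit in one task while an unrelated operation gets its own task:

    require "async"
    require "httparty"

    Async do |task|
      # Related operations: plain sequential Ruby inside a single task,
      # no promises or callbacks needed.
      task.async do
        first = HTTParty.get("https://httpbin.org/uuid")
        uuid  = first.parsed_response["uuid"]
        HTTParty.get("https://httpbin.org/anything/#{uuid}")   # second request uses the first result
      end

      # Unrelated operation: its own task, so it runs at the same time as the pair above.
      task.async { sleep 2 }
    end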
00:13:18.120
now nesting tasks you can nest tasks
00:13:24.240
arbitrarily so you can see we have a subtask and then we're calling async again which creates another subtask
00:13:31.079
subtasks can be created in three ways so if you have a reference to a task you
00:13:37.079
just call async which creates a new task if you do not have a reference to a task variable you use
00:13:44.160
Async::Task.current which gives you the current task and then
00:13:50.079
again call async lastly and maybe this is a little bit surprising you can call the capitalized Async
00:13:57.440
method inside an existing Async block and that will just create an async
00:14:03.519
task that may be surprising out of these three approaches I prefer the first two because they're
00:14:09.959
clearer and I like to keep the capitalized kernel method just as the opening Async block okay but it's
00:14:17.240
fully supported if you like it you can use it
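Here is a sketch of the three approaches side by side:

    require "async"

    Async do |task|
      # 1. You have a task reference: call #async on it.
      task.async do |subtask|
        subtask.async { sleep 1 }              # nest as deep as you like

        # 2. No task variable handy: ask for the current task.
        Async::Task.current.async { sleep 1 }

        # 3. Call the capitalized Kernel#Async method again -- inside an
        #    existing Async block this simply creates another task.
        Async { sleep 1 }
      end
    end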
00:14:23.920
this brings us to the next point the next thing that I want to emphasize is you are free to extract your logic however you prefer
00:14:31.199
okay you do not have to keep your async logic inside this async block which grows and grows to 2,000 lines no you use
00:14:38.240
methods you use classes and modules to extract and factor your code however you see fit and use the tools that
00:14:45.079
you have the takeaways from the basic example group are okay we have this
00:14:52.079
top-level Async block tasks are just wrappers around fibers and you
00:14:57.160
write your code synchronously put it in a task and it runs great so it's all just Ruby you refactor it how you see
00:15:06.360
fit before jumping into the next example group let me describe the
00:15:12.480
feature that we're going to build let's say you have a weather app okay and the biggest feature is you want to
00:15:18.000
show the weather as fast as possible to the user and there's a plethora of weather APIs on the
00:15:25.399
internet you chose five of them because you want your app to be super reliable so if one of those
00:15:31.399
APIs is slow unreliable flaky whatever you've got it covered with
00:15:36.440
the rest of the providers so what you do is once the user opens the app you trigger five HTTP
00:15:42.600
requests you use the fastest the first response and then you discard the rest of the responses the
00:15:49.360
slow ones okay here's how you would do it in Async
00:15:54.440
Ruby in order to accomplish this we're introducing this new class Async::Barrier
00:15:59.880
it comes with the async gem so you have it by default and it is heavily used so it's a basic
00:16:06.720
class the best way to explain what an async barrier is it's a group of
00:16:11.800
async tasks okay once you have a group of tasks for
00:16:17.240
whatever reason you can perform actions on it if you call the method async it will
00:16:23.720
create a new task and add it immediately to the group if you call stop on a group it will stop all the tasks in
00:16:29.839
the group and wait behaves similarly to Promise.all in JavaScript it will
00:16:34.920
iterate over the tasks in your group and wait on each one so you use that group to create five
00:16:42.959
HTTP requests you use group.async
00:16:48.880
so I want to emphasize that calling async is a pattern it almost always
00:16:54.560
creates an async task and then it does some other action associated with the task in this case it immediately adds the
00:17:01.959
task to the group we're now making API
00:17:08.240
requests and we have this random delay in the URL and we're just simulating slow
00:17:14.799
responses the first task the first HTTP request that finishes will stop this whole
00:17:20.160
group it will discard the other results because we don't care they're slow do note that we are spinning up another task so
00:17:27.919
this task will not be a part of the group because it's not group.async it's just task.async and then we're using
00:17:34.559
that outside task to stop the group this is a small gotcha because we do not want to use any of the tasks
00:17:40.760
inside of the group because it might kill itself and then it will not kill the other tasks okay small gotcha you
00:17:47.400
can see the output it's kind of predictable right we are starting five tasks and then the first task
00:17:53.720
that finishes is task number one and it kills the other ones
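A sketch of the whole weather-provider example following the pattern just described; the delay URL is a placeholder that simulates providers of random slowness:

    require "async"
    require "async/barrier"
    require "httparty"

    Async do |task|
      barrier = Async::Barrier.new            # a group of tasks
      fastest = nil

      5.times do |i|
        barrier.async do                      # creates a task AND immediately adds it to the group
          response = HTTParty.get("https://httpbin.org/delay/#{rand(1..5)}")
          puts "provider #{i} finished first"
          fastest ||= response                # keep the first response, discard the rest
          # Small gotcha: stop the group from an *outside* task, so the winner
          # doesn't kill itself while it's still stopping its siblings.
          task.async { barrier.stop }
        end
      end

      barrier.wait                            # roughly Promise.all: wait on every task in the group
      fastest
    end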
00:18:00.880
another feature you have an API provider and it has a limitation you can
00:18:07.520
only make two HTTP connections two HTTP requests at the same
00:18:15.400
time here's how you would do it with Async Ruby we are introducing another
00:18:20.440
concept another class that's pretty basic and comes with the async gem itself it's called Async::Semaphore we are
00:18:27.360
instantiating that class with a limit of two right that's our limit for the API
00:18:34.640
provider then we're using semaphore.async this is the same pattern that I mentioned earlier so it does create
00:18:41.360
another async task and it does something else implicitly in this case it limits the execution of tasks to two at
00:18:48.360
the same time and then we're performing our work
00:18:54.559
and this is the output that you can see on the screen if you look at the starts and ends you can see that at any
00:19:00.640
given point in time a maximum of two tasks are executing at the same time okay so one task has to end in order for the other one to start
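A sketch of the semaphore example; the httpbin call stands in for the rate-limited API:

    require "async"
    require "async/semaphore"
    require "httparty"

    Async do
      semaphore = Async::Semaphore.new(2)     # at most two tasks inside at any moment

      10.times do |i|
        semaphore.async do                    # same pattern: creates a task, but admits only two at a time
          puts "start #{i}"
          HTTParty.get("https://httpbin.org/delay/2")
          puts "end #{i}"
        end
      end
    end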
00:19:08.760
the takeaways from these last two
00:19:15.320
examples are asynchronous tasks are really easily controllable you can group them wait on them spin up new tasks
00:19:22.559
really easily so yeah they're really manageable this is my favorite
00:19:30.120
example group which is scaling in order to show it I want to
00:19:36.120
introduce a couple of other operations so this is starting from the basic example task where we have HTTP requests
00:19:43.360
and sleep as the fourth operation the fourth task I'm adding a Redis operation which is just a
00:19:49.240
random operation with a timeout of two so this operation will take two seconds I'm introducing the Sequel gem which is
00:19:56.520
running a Postgres query which also takes 2 seconds it is
00:20:01.600
pretty much a sleep and then lastly at the bottom the last operation is
00:20:07.200
spawning a system process which is a simple sleep of two seconds you can see
00:20:12.280
at the bottom the total execution time is 3 seconds right so these tasks run
00:20:18.320
at the same time and there is some latency introduced by the first task which is the HTTP request
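A sketch of that five-operation example; the connection string, the Redis key and the exact client gems are assumptions, and the Postgres and Redis clients are assumed to cooperate with the fiber scheduler as described in the talk:

    require "async"
    require "open-uri"
    require "httparty"
    require "redis"     # redis gem
    require "sequel"    # sequel gem with the pg adapter

    DB    = Sequel.connect("postgres://localhost/postgres")   # placeholder connection string
    REDIS = Redis.new

    Async do |task|
      task.async { URI.open("https://httpbin.org/delay/2").read }   # HTTP request
      task.async { HTTParty.get("https://httpbin.org/delay/2") }    # HTTP request
      task.async { sleep 2 }                                        # plain sleep
      task.async { REDIS.blpop("no-such-key", timeout: 2) }         # Redis call that waits ~2 seconds
      task.async { DB["SELECT pg_sleep(2)"].all }                   # Postgres query that takes ~2 seconds
      task.async { system("sleep 2") }                              # spawned system process
    end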
00:20:24.640
okay let's run all these tasks 10 times so we do 10 times that is the
00:20:32.440
only change here and we are now up to 50 tasks you can see the total execution
00:20:37.679
time is slightly below 4 seconds so yeah there's some overhead
00:20:42.799
but these tasks run at the same time let's crank things up 100 times
00:20:48.000
repeat everything we're up to 500 tasks if we look at the total execution time it's
00:20:53.679
slightly below 5 seconds so yeah there is some overhead that is accumulating
00:20:59.159
but yeah this is all running at the same time in order to get this example to work I was running
00:21:04.480
all this on my laptop and I did have to tweak my local Postgres configuration to increase the max
00:21:11.000
connections in order to get this to work but other than that it just worked and the last and my favorite
00:21:18.200
example is we are cranking things up a thousand times okay this is now running 5,000 operations at the same time with
00:21:25.640
Async Ruby in order to do that I had to switch instead of URI.open I'm using
00:21:32.120
now async-http which is the gem I mentioned which is great the big
00:21:37.440
guns and async-http is using HTTP/2 so the benefit of it is that it is
00:21:44.400
only creating one TCP connection and performing one HTTP handshake so it
00:21:49.600
radically reduces the overhead of making a thousand HTTP requests the total execution time again
00:21:58.120
with some overhead is slightly below 8 seconds so yeah these 5,000 operations do run at the same time
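A sketch of the final version using async-http's persistent client; the URL and the request count are placeholders:

    require "async"
    require "async/barrier"
    require "async/http/internet"

    Async do
      internet = Async::HTTP::Internet.new    # reuses connections, speaks HTTP/2 where the server supports it
      barrier  = Async::Barrier.new

      1000.times do
        barrier.async do
          response = internet.get("https://httpbin.org/delay/2")
          response.read                       # read and discard the body
        end
      end

      barrier.wait
      internet.close
    end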
00:22:05.279
the takeaways from this example group look I think this is easy to scale it's really easy to crank
00:22:12.080
up the numbers and do something mind-boggling and the only thing really that you have to take care of is well
00:22:19.320
don't crash your database or the target HTTP API server that you're connecting
00:22:25.760
to you have to be careful okay so I bet that someone in this
00:22:34.240
audience there is a skeptic among you saying well Bruno async is fine
00:22:39.400
but I could do everything that you just showed with plain old Ruby
00:22:45.240
threads okay fair enough and I want to address that because async is compared to
00:22:52.120
threads very often so I want to do a comparison of Async Ruby with threads and there are some pros and cons
00:22:58.720
to both let's revisit the ideal architecture with Ruby threads and while
00:23:05.360
doing this keep in mind the popular Sidekiq and Puma and how they work we all know that so Sidekiq and
00:23:11.960
Puma at the start of the process start five or 10 or 20
00:23:17.320
threads these threads live forever or practically until the process ends and
00:23:24.120
during the life of the process these threads are kept as
00:23:29.240
isolated as possible so yes you can group threads and manage them but you
00:23:35.360
do not want to do that for really successful projects they are
00:23:42.120
kept as isolated as possible because otherwise you get bugs and then these threads pull work from
00:23:49.799
either a shared queue or in the case of Puma they read requests from a socket do the work and output responses
00:23:59.679
a quick comparison with fibers so with threads 20 30 50
00:24:05.960
that's pushing the limits with fibers you can go to thousands and more
00:24:11.679
threads are monolithic you start them you do not touch them you do not intervene with them fibers and
00:24:18.840
Async Ruby tasks are much more controllable and manageable you can make them do work for you and lastly
00:24:26.440
with fibers there's no special architecture that you have to employ right you just write your code synchronously and it
00:24:32.320
will work inside async let's do more comparison of async versus threads okay so with threads the
00:24:39.840
basic construct is operating system threads with async it's fibers it's all
00:24:44.880
implemented in the Ruby source code so there are no system calls that need to be performed for a fiber to be
00:24:51.960
created or switched because of that async fibers or just fibers in general
00:24:58.559
do have a relatively small overhead and threads have a somewhat bigger overhead as for concurrency with
00:25:06.360
threads look if you are running 100 threads per process you're really
00:25:11.600
stretching those boundaries and you are really pushing it with async tens of
00:25:17.279
thousands is no problem and the highest number that I've heard of is the creator of the async ecosystem Samuel
00:25:24.799
Williams has run a server with two million concurrent fibers with Async
00:25:33.159
Ruby this is a deep topic but scheduling with threads is called preemptive I'm going to simplify
00:25:39.520
this is not scientific but with preemptive scheduling you kind of get random switches from thread to thread
00:25:45.200
a thread can be switched at any given point in time with async the scheduling
00:25:52.320
scheme is called cooperative which means the io-event gem will switch from fiber to fiber only when
00:26:00.440
the first fiber or first task declares okay I'm waiting now it's okay to switch and then the
00:26:07.200
scheduler switches fibers as for control how hard is it to control threads it's possible
00:26:14.799
threads do have constructs like thread groups they have condition variables they have mutexes they have various
00:26:20.559
synchronization primitives but look it's hard if you can avoid it you will avoid it right think of how Sidekiq and
00:26:27.320
Puma work they keep things isolated and they are successful libraries whereas as we saw in the examples you do get
00:26:34.360
that fine-grained manageability with fibers common errors with
00:26:40.039
threads if you try to employ mutexes and thread groups you will
00:26:47.919
shoot yourself in the foot it's very very common to have bugs with threaded code
00:26:53.039
with async the main concern really is you're going to crash your database or you're going to crash a server or you're
00:26:58.559
going to get rate limited so for these reasons above
00:27:03.799
is it really advisable to write threads in your application code well be really really careful
00:27:10.720
really really careful make sure that you really know what you're doing and do dedicate weeks to debugging your
00:27:17.559
threaded code running it to make sure it doesn't have bugs with async it's common to have
00:27:24.240
async code just written as a part of your application
00:27:30.360
there are a couple of advantages to threads they have been with us since practically forever
00:27:36.840
so there is a big body of knowledge around Ruby threads okay you have books there are a lot of questions and
00:27:43.240
answers on Stack Overflow so if you get stuck with a thread question with an advanced scenario
00:27:48.600
you probably will find an answer async in its current iteration has been with us since Ruby 3.0 so look
00:27:57.080
there are decent guides the community is great really responsive willing to help but if you have
00:28:03.760
that advanced scenario you will probably end up reading the source
00:28:10.360
code threads are better for CPU work okay so if you have CPU work
00:28:17.159
a thread scheduler will make sure that each thread gets a portion of CPU time whereas if a single fiber
00:28:24.240
has a long-running CPU task it will run until it completes and during that time other fibers will not get
00:28:31.080
scheduled so they will not get any CPU time that's called CPU starvation or in
00:28:38.640
this context fiber starvation so if you have CPU work threads do have a slight advantage
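A minimal sketch of that starvation effect: the second task hogs the CPU and never yields, so the first task's 0.1-second sleep doesn't wake up until the CPU work is done:

    require "async"

    Async do |task|
      task.async do
        start = Time.now
        sleep 0.1                            # yields to the scheduler
        puts "woke up after #{(Time.now - start).round(2)}s"   # ~3s, not 0.1s
      end

      task.async do
        finish = Time.now + 3
        nil while Time.now < finish          # CPU-bound loop, never yields
        puts "CPU-bound task done"
      end
    end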
00:28:43.919
so for that reason if you have a classic Rails app that has a lot
00:28:50.519
of view rendering a lot of CPU work in view generation like 50
00:28:56.360
to 100 milliseconds spent on view generation I would give a slight
00:29:01.880
advantage to a threaded web server the classic example is Puma
00:29:07.559
for a classic Rails app with async we do have Falcon which is an async web server it's great but do make
00:29:15.159
sure that you have a fitting use case before using it right so either you have an API-only application that
00:29:23.559
does heavy I/O heavy networking heavy HTTP requests or
00:29:30.279
similar so I do want to promote Async Ruby there are really ideal use
00:29:35.960
cases for it I have done a crawler and I can testify it's great if
00:29:42.279
you have a chat app or any kind of streaming use case it's a great fit websockets also work
00:29:50.840
great some of the other things that I've done and I want to get this out because really this is not a question of
00:29:58.039
should I use threads or should I use Async Ruby or fibers you can use both and
00:30:03.440
these are the examples that I've done and both involve running many HTTP requests so inside of a Sidekiq job I have run
00:30:10.080
async to make things faster
00:30:15.320
doing many HTTP requests and the same with Puma I had a situation where I had to do four or five requests
00:30:22.880
that go out at the same time and I've done it in a Rails controller of course factored properly but it's all been
00:30:28.399
working inside Puma and it is great
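As a sketch of the Sidekiq case, here is a hypothetical job that fans out a handful of HTTP requests with Sync, the async gem's kernel method that runs a block inside an event loop, reusing one if it already exists; the class name and URLs are made up:

    require "async"
    require "async/barrier"
    require "net/http"

    # Assumes a Sidekiq app where Sidekiq::Job is already loaded.
    class RefreshSourcesJob
      include Sidekiq::Job

      URLS = %w[https://example.com/a https://example.com/b https://example.com/c]

      def perform
        Sync do                               # enter an event loop on the current (Sidekiq) thread
          barrier = Async::Barrier.new
          URLS.each do |url|
            barrier.async { Net::HTTP.get(URI(url)) }
          end
          barrier.wait                        # all responses fetched concurrently
        end
      end
    end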
00:30:35.320
so a couple of common questions does it work with Rails yes Jean Boussier I think is the guy that is pushing this work forward it
00:30:41.480
does work with Rails there was a guy who gave a talk about running
00:30:46.840
the Falcon server with Rails it works normally so you can check
00:30:52.600
that out on YouTube is it production ready does it crash I run
00:30:58.279
an application in production and it does run just fine there's nothing that makes me
00:31:03.760
say oh this crashes often or anything it is stable lastly if I got you interested
00:31:10.559
and you want to try it out where do you get started I think the best place to start is the official repository as I
00:31:17.080
mentioned the docs are really good there are official guides but look this
00:31:22.200
is a new paradigm and you may get stuck do ask questions the community is responsive and helpful
00:31:28.720
but again if you push it if you have an advanced use case be prepared to dive
00:31:34.919
into the source code and read it thank you all as I mentioned I
00:31:40.320
am really excited to talk about this project so I'm thinking we do a small async workshop at 4 p.m. that
00:31:47.039
is in about 50 minutes in the restaurant on the second floor if you have
00:31:52.679
questions feel free to ask I'm really interested in helping you get started
00:31:58.399
and we can even do more advanced examples and demos there are some mind-blowing examples with async if
00:32:05.000
you want to reach out feel free to contact me via email you can find it on my web page
00:32:10.279
bruno.com thank you everyone