


Fluentd: Data Streams in Ruby World

Satoshi "moris" Tagomori • July 22, 2014 • Singapore • Talk

In this talk presented at the Red Dot Ruby Conference 2014, Satoshi "moris" Tagomori introduces Fluentd, an open-source data collector written in Ruby. Fluentd addresses the growing need for efficient log management and real-time analytics as data streams increase in size and variety. The presentation covers the functionality of Fluentd, its plugin architecture, and its development roadmap, with case studies from Tagomori's work at LINE Corporation.

Key Points Discussed:

  • Introduction to LINE Corporation:

    • LINE has over 470 million users and processes billions of messages daily, necessitating robust log handling and data metrics solutions.
  • What is Fluentd?

    • Fluentd is a log collector middleware aimed at simplifying log management across systems and applications, written in Ruby and designed to be easy to deploy and extend.
  • Challenges in Log Management:

    • Traditional methods of log handling (e.g., using tail commands, custom scripts) can lead to chaotic environments, making it difficult to maintain control over logs and metrics.
  • Fluentd’s Core Features:

    • Event Structure: Fluentd organizes data into three main components: tags for routing, timestamp for event timing, and records in JSON format.
    • Configuration Flexibility: It allows flexible configuration, using a format akin to Apache’s configuration, with sections for inputs and matches.
    • Plugin System: Fluentd has an extensible plugin architecture with over 300 available plugins for input, output, and buffering. These plugins facilitate connectivity with various storage systems and external middlewares such as Hadoop, MySQL, and Elasticsearch.
    • Stream Processing: The software can handle events, process them in real-time, and forward results to other nodes or storage solutions, accommodating complex event processing tasks and analytics.
  • Use Cases at LINE:

    • LINE employs Fluentd to manage log data efficiently across multiple servers. Two main clusters have been established—one for delivery and stream processing, and another for aggregation and further processing of data logs, ensuring high performance and reliability.
  • Future Development:

    • The speaker discusses upcoming enhancements in Fluentd, including planned features for major versions, potential support for JRuby, and improvements for running on Windows systems.
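The three-part event structure listed above (tag, time, record) can be sketched in plain Ruby. This is an illustrative model only, not Fluentd's actual internal classes; the tag and record contents are made up:

```ruby
require "json"
require "time"

# A Fluentd event is just three elements: a routing tag (a dot-separated
# string), a Unix timestamp (64-bit integer), and a schema-less record
# (a hash with string keys, i.e. a JSON object).
Event = Struct.new(:tag, :time, :record) do
  def to_s
    "#{Time.at(time).utc.iso8601}\t#{tag}\t#{record.to_json}"
  end
end

event = Event.new("app.access", 1406008800, { "method" => "GET", "status" => 200 })
puts event
```

Because the record needs no schema, any JSON-compatible payload can ride in the same three-element envelope.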

Conclusion and Takeaways:
- Fluentd streamlines the management of large-scale data streams and provides essential tools for monitoring and processing logs, making it a valuable asset for organizations handling extensive metrics.
- Its flexibility, robust plugin ecosystem, and continuous development position Fluentd as a leader in log management solutions for Rubyists. Tagomori encourages the audience to consider Fluentd for their data collection needs to simplify and control their logging processes.


Data streams (ex: logs) are becoming larger, while analytics in (semi-) real time is also becoming more important today. We want to collect huge data sets from many sources, and analyze these data in various ways to gain valuable business insights. For these purposes, software on the JVM (Hadoop, Flume, Storm, ...) works well, but we (Rubyists!) need more Ruby-like, scriptable, easy-to-deploy and extensible software. That is Fluentd.

I'll introduce Fluentd, a famous log collector in Japan, and its plugin systems. Fluentd is a very simple and well-designed piece of software that has various plugins to collect/convert/aggregate/write stream data, and it is already used in many production environments. I will also talk about Fluentd's development plans, newly implemented features, and case studies at LINE.

Red Dot Ruby Conference 2014

00:00:20.720 Okay, in this session I will talk about Fluentd, a log collector middleware written in Ruby.
00:00:43.120 That is my Twitter and GitHub account name, and I am working at LINE Corporation. LINE Corporation is the company serving the messaging application LINE.
00:00:57.280 Please let me ask two questions. First: who knows LINE? Whoa, fantastic, thank you. That is very good news for my co-workers. One more question: who uses LINE? Oh, not so many people, but thank you very much. I will tell our marketers that they should work harder in Singapore.
00:01:35.360 Okay. Today LINE has 470 million users or more, and we deliver 10 billion or more messages per day. We have many various services and applications on the LINE platform. That is why we must handle many kinds of metrics and a huge amount of logs, and that is why I am a user and committer of the Fluentd project.
00:02:21.280 One more question: who knows Fluentd? Oh, some people know Fluentd, fantastic. But many people do not know Fluentd, and that is why I am talking here about Fluentd now.
00:02:39.680 Okay. Fluentd is an open source data collector to simplify log management; that is from Fluentd's official website.
00:02:48.879 Before Fluentd, we had to handle logs separately for each pair of sources and destinations. We had to read logs with cat and tail commands, process those logs with ad-hoc scripts, and post metrics into clusters. On the other side, we executed rsync or scp from cron and copied log files into storage systems. We had to consider error handling and buffering, and more: we had to handle many pairs of sources and destinations, routings, API keys, and log formats. These are very, very complex processes, and the situation becomes chaos.
00:03:50.400 Okay. Fluentd solves this problem, and our situation becomes very controllable. Fluentd also does the formatting, buffering, retries, and all those things.
00:04:06.879 Actually, Fluentd is an open source data collector written in Ruby. It runs on CRuby, on Unix-like OSes like Linux, Mac OS X, BSD, and others. Fluentd has error handling and retries in its core, and Fluentd has plugin systems: input plugins, output plugins, and buffer plugins. Fluentd ships with many built-in plugins.
00:04:37.759 Fluentd is distributed on rubygems.org, so we can install and run it simply by gem install fluentd. Fluentd plugins are also distributed on rubygems.org, so we can install plugins the same way.
00:04:57.680 RPM and deb binary packages are also available from Treasure Data, the company hosting Fluentd's development.
00:05:11.360 Okay, why Fluentd? There is one very big reason: Fluentd's logo is so cute. That is very, very important. There are many logos and mascot characters, especially in the big data software world. Yes, he's so cute. Anyway, Fluentd has many good points, and I will talk about these points now.
00:05:50.320 At first, this is the Fluentd event. A Fluentd event is very simple: it is built from three elements — tag, time, and record. The tag is used for routing, and a tag is actually a dot-separated string. Time is in Unix time, a 64-bit integer, and it is formatted for our local times and local formats. The record is actually a JSON object, a hash object with string keys in the Ruby world. The record does not need any schema, so we can put any data into the record.
00:06:52.240 Next, this is Fluentd's configuration. Fluentd's configuration syntax is the same as the Apache web server's, and it has sections: source sections for input plugins, and match sections with tag patterns for output plugins. We can write any number of source sections and match sections in one Fluentd configuration.
00:07:24.960 Fluentd also has a Ruby DSL configuration syntax. This configuration example expresses just the same thing as the previous Apache-style configuration. And also, of course, we can write any Ruby code in this configuration, for example loops or any enumerators.
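A minimal sketch of the Apache-like syntax described here, with one source section and one match section — plugin choices and the file path are illustrative, not from the talk:

```
# read lines appended to a file and tag the events "app.access"
<source>
  type tail
  path /var/log/app/access.log
  format json
  tag app.access
</source>

# route every event whose tag matches the pattern app.** to stdout
<match app.**>
  type stdout
</match>
```

Adding more source and match sections extends the pipeline without touching the applications that produce the logs.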
00:07:54.240 Okay, next: tag-based routing. Fluentd does tag-based routing. Fluentd can have any number of input plugins, and input plugins emit events into the Fluentd core. The Fluentd core mixes these events, and then routes the events by tag patterns to each output plugin. Output plugins process these events according to their own configurations.
00:08:36.479 And a Fluentd output plugin can emit events back into the Fluentd core, so an output plugin in this style performs like a filter.
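The tag-pattern routing described above can be illustrated with a toy matcher. Fluentd match patterns use `*` for one dot-separated tag part and `**` for an arbitrary suffix; this is a simplified re-implementation for illustration, not Fluentd's actual code, and the tags and output names are made up:

```ruby
# Convert a Fluentd-style match pattern into a regular expression:
#   "**" -> any (possibly empty) remainder of the tag
#   "*"  -> exactly one dot-separated tag part
def pattern_to_regexp(pattern)
  source = Regexp.escape(pattern)
                 .gsub('\*\*', '[^ ]*')
                 .gsub('\*', '[^.]+')
  /\A#{source}\z/
end

# The first matching <match> section wins, as in Fluentd's routing.
def route(tag, matches)
  matches.find { |pattern, _out| tag =~ pattern_to_regexp(pattern) }&.last
end

matches = [["app.access", :file], ["app.**", :stdout], ["**", :null]]
route("app.access", matches)  # => :file
route("app.db.slow", matches) # => :stdout
route("system.cpu", matches)  # => :null
```

Because routing is driven only by tags, inputs and outputs stay decoupled: producers pick tags, and the configuration decides where each tag goes.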
00:09:00.640 Okay, Fluentd already has many public plugins — 300 or more — and we can install these plugins from rubygems.org with gem commands.
00:09:17.279 Okay, next: Fluentd patterns. Let's see what Fluentd can do. This is the first and most simple pattern: Fluentd reads lines from a file, parses them into Fluentd events, and then formats these events and writes them, line by line, into files. This is very simple, and by changing the output plugin we can write events to any storage systems or external middleware, like MongoDB, MySQL, Elasticsearch, Hadoop HDFS, Amazon S3, or Google BigQuery. Many more plugins exist, so we can write events into many more external storage systems.
00:10:21.200 Next pattern: Fluentd can receive event data from other Fluentd nodes, and Fluentd can forward event data to another Fluentd node over TCP or HTTP. Fluentd can also receive events from Fluentd logger client libraries for each programming language, over TCP.
00:10:51.680 The Fluentd forward plugin has load balancing and active-standby features, so this is very useful for building Fluentd clusters. And fluent-plugin-secure-forward can provide forwarding over the Internet, over SSL with authentication.
00:11:22.160 Okay, Fluentd can connect with other middleware, like syslog, Facebook Scribe, or Apache Kafka: Fluentd can input events from these middleware and output events into them.
00:11:46.079 Okay, next: Fluentd can copy events, so Fluentd writes events into Hadoop HDFS and at the same time forwards the same events into another Fluentd.
00:12:07.360 Okay, Fluentd can count events by regular expression patterns, so Fluentd can report, for example, that events matching regular expression pattern one are 60 per second and pattern two are 20 per second.
00:12:38.320 Fluentd can also count events by numeric value ranges, and it can aggregate numeric values and then output the maximum value, minimum value, average, summations, and, for example, 90th percentiles. This plugin is very useful for HTTP server response times and many more use cases.
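The numeric aggregation described here — the kind done per time window by monitoring output plugins — can be sketched in a few lines of Ruby. The function name, field values, and the nearest-rank percentile method are illustrative, not taken from any specific plugin:

```ruby
# Aggregate one numeric field over a window of events and report
# min / max / average / sum plus a percentile, as the monitoring
# output plugins described in the talk do per time window.
def aggregate(values, percentile: 90)
  sorted = values.sort
  idx = ((percentile / 100.0) * (sorted.size - 1)).round
  {
    min: sorted.first,
    max: sorted.last,
    avg: values.sum / values.size.to_f,
    sum: values.sum,
    "p#{percentile}": sorted[idx]
  }
end

# e.g. response times in milliseconds collected during one window
response_times = [12, 30, 25, 130, 48, 22, 41, 35, 27, 90]
aggregate(response_times)
# => {:min=>12, :max=>130, :avg=>46.0, :sum=>460, :p90=>90}
```

Emitting only these summary numbers downstream keeps the monitoring stream tiny compared to the raw event stream.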
00:13:21.680 Fluentd has many other various input plugins, like dstat, which collects Linux performance data, or the SQL input plugin, which runs SQL select statements and handles the results as Fluentd input events. Moreover, Fluentd has an exec plugin: the exec input plugin executes any external command and handles its output as Fluentd's input, so actually we can do anything with this plugin.
00:14:12.720 Fluentd also has many various output plugins, like notification plugins to notify on IRC — like this: if 100 percent of response codes are 400s, that is a very bad situation — and the same for HipChat. Or Fluentd can render numeric values on graph tools. And also, the Fluentd exec output plugin can do anything with an external command.
00:14:56.399 One more, the last pattern: a Fluentd output plugin can emit events back into the Fluentd core, so it just works as a filter. This is one example: with the Fluentd exec filter plugin, we can use an external command as a filter, like Unix command pipelines. So this is perfectly stream processing with filters: Fluentd's input events are processed as a stream, and Fluentd can output the results into any other storage systems or other nodes.
00:15:51.199 And we are using a stream processing RPC server, Norikra. Norikra provides stream processing with SQL. The Fluentd Norikra output plugin sends events into Norikra, Norikra processes these events by SQL, and then Fluentd fetches the SQL results as events and sends those events into other nodes and other systems.
00:16:32.320 And Fluentd does error handling and retries for all these plugins; this is very useful. For these reasons, Fluentd can turn the problems of those chaos situations into controllable situations.
00:16:55.120 Okay, Fluentd's versions. Fluentd's latest version is 0.10.50, released in the middle of this month. The 0.10.x versions are the stable and latest versions, and many minor feature updates and bug fixes are included; new features for version 1 are also included.
00:17:33.200 Fluentd version 1 is planned as the first major release, planned for some day this year. Fluentd version 1 will be completely compatible with the current latest versions, and also has new additional features on its roadmap: for example, a new configuration syntax, new plugin APIs, daemon process management, and multi-core CPU support.
00:18:12.640 Okay, current Fluentd runs on CRuby and only on Unix-like systems, but Fluentd on JRuby is under development. Our developer is trying to fix Cool.io to support JRuby; Fluentd is based on Cool.io, which is an event-driven programming library. When this fix is completed, we can start to support JRuby for Fluentd. And also, Fluentd on Windows is under development: a windows branch exists in Fluentd's repository, and 95 percent of it is working, but it is not complete, and some developers are working on Fluentd on Windows.
00:19:25.840 Okay, the use case at LINE. This is our analytics data flow: we are using Fluentd clusters to collect log data, route it, and do stream processing.
00:19:54.640 We have two major Fluentd clusters: one is for delivery and stream map processing, and the second is for aggregation and stream reduce processing. In detail, we have four Fluentd clusters. The delivery cluster collects logs from many servers, stores these logs into archive storages, and copies them, with load balancing, to the next cluster. The worker cluster parses these logs and stores them into Hadoop HDFS, and also forwards them into the stream processing clusters. The watcher cluster does processing for monitoring and notifications, and then we can receive alert notifications for too many error responses and other situations.
00:21:15.360 And also we have a CEP, complex event processing, cluster. This is general-purpose stream processing: our application engineers write SQL and put it into this Norikra cluster, so the Norikra cluster processes Fluentd's events for each application, and Fluentd stores the results into each application's metrics storages.
00:21:52.080 Now we have many servers for Fluentd, but that is not a big problem. We handle 1.5 billion or more events per day, and at peak time 150 thousand or more events per second. Fluentd has high enough performance for these events, and it is stable enough: Fluentd has not failed in these two years, so we are very happy using Fluentd.
00:22:41.840 Okay, this is the wrap-up. Fluentd is a very flexible system, we have many plugins, and we can write our own plugins easily. I believe that Fluentd is very good software, and Fluentd can make you happy if you are trying to collect many kinds of logs, or trying to start stream processing in a very easy way. Thank you.
00:23:25.120 [Host] Thank you, Satoshi. Do we have any questions for him, about Fluentd or LINE or anything?
00:23:37.760 Oh, okay. Please ask me questions in very slow English.
00:24:00.559 [Audience] How do you handle the cases where we lose data? Any cases where we can lose the data, and how do we handle that?
00:24:13.679 The question is about fault tolerance, about node failures or other problems. Yes, fault tolerance.
00:24:26.960 Okay. Fluentd has no built-in node-level fault tolerance, but Fluentd has file buffers for this problem. Fluentd has many types of buffer plugins, and with the file buffer, Fluentd writes events into files as buffers. That causes a performance penalty, but once events are written into the file buffer, Fluentd does not lose them unless the disk is broken.
00:25:13.679 For cluster-level fault tolerance, we can copy events and forward them to another node, and we can also use load balancing and active standby. So if we duplicate these events and write them to two different places, then even if one node fails, the events are still saved on the other.
00:25:52.640 But Fluentd has no acknowledgement system; Fluentd does acknowledgements only with TCP handshakes and ACKs. We have no plan to support application-level acknowledgements, because that would bring very big performance penalties, so we are not planning acknowledgements, for performance reasons. Okay.
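The buffer-then-retry behavior described in this answer can be sketched as a toy in Ruby: events go to an on-disk buffer first, and a failed flush leaves the buffer intact so nothing is lost while the destination is down. This is an illustrative simplification, not Fluentd's buffer implementation; the class and the failing destination are hypothetical:

```ruby
require "json"
require "tempfile"

# Toy version of a file buffer: append events to a file first, then try
# to flush. On failure the buffer file is kept, so buffered events
# survive destination outages and can be flushed again later.
class FileBuffer
  def initialize(path)
    @path = path
  end

  def append(event)
    File.open(@path, "a") { |f| f.puts(event.to_json) }
  end

  def flush
    events = File.readlines(@path).map { |line| JSON.parse(line) }
    yield events               # deliver to the destination; may raise
    File.truncate(@path, 0)    # clear the buffer only after success
    events.size
  rescue StandardError
    :retry_later               # keep the buffer file intact for retry
  end
end

tmp = Tempfile.new("buffer")
buf = FileBuffer.new(tmp.path)
buf.append({ "status" => 500 })
buf.flush { |_ev| raise "destination down" }  # => :retry_later, nothing lost
buf.flush { |ev| ev }                         # => 1, buffer now empty
```

The trade-off he mentions is visible here: durability comes from the extra disk write on every append, which is exactly the performance penalty of the file buffer versus a memory buffer.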
00:26:37.360 Thank you.
00:26:40.400 [Host] Any other questions?
00:26:46.080 [Audience] Well, we use Fluentd in production, and it's changed our lives. It's exactly like the diagram you used — actually, could you go back to the diagram? The dependency one. Yeah, that one. That kind of summed up our life before.
00:27:13.520 Yes, Fluentd solved that problem.
00:27:16.960 [Audience] Any reason why, if you go back, Python is on top? Is it because it only handles the easy things?
00:27:22.240 This is my little joke.
00:27:30.720 [Host] All right, okay. Thank you, Satoshi, thank you very much.