- John-David Dalton (twitter github blog)
- AJ O’Neal (twitter github blog)
- Jamison Dance (twitter github blog)
- Joe Eames (twitter github blog)
- Merrick Christensen (twitter github)
- Charles Max Wood (twitter github Teach Me To Code Rails Ramp Up)
01:32 – John-David Dalton Introduction
02:19 – jsPerf
07:48 – Lo-Dash
14:50 – Performance
21:07 – Performance Optimization
25:53 – Use
33:53 – Competition
40:32 – Testing Performance
46:00 – Optimizations to Improve Performance
50:34 – Async
- Home Depot (AJ)
- Sam’s Club (AJ)
- Carrabba’s (AJ)
- Sizzler (AJ)
- The Michael J. Fox Show (Joe)
- Skinit (Joe)
- nodist (Joe)
- nave (John-David)
- <dialog> element: Modals made easy (Merrick)
- 1Password (Merrick)
- cdnjs (John-David)
- Modules (John-David)
- npm-stat (John-David)
- Lorn – Ask The Dust (Jamison)
- Rebecca Murphey: Optimizing for Developer Delight (Jamison)
- James Mickens: The Slow Winter (Jamison)
- League of Legends (Chuck)
- Reflector App (Chuck)
Impact.js with Dominic Szablewski
AJ: You know that moment when you had something that you needed on your desk and then you decide to organize and clean your desk and that thing that you need is gone?
AJ: Yeah, me neither.
[Hosting and bandwidth provided by the Blue Box Group. Check them out at BlueBox.net.]
[This episode is sponsored by Component One, makers of Wijmo. If you need stunning UI elements or awesome graphs and charts, then go to Wijmo.com and check them out.]
AJ: Yo, yo, yo. Coming at you live from Provo.
CHUCK: Jamison Dance.
JAMISON: Hey friends.
CHUCK: Joe Eames.
JOE: Hey there.
CHUCK: Merrick Christensen.
MERRICK: Hey guys.
CHUCK: I’m Charles Max Wood from DevChat.TV. I just want to remind you to go get my freelancing video at GoingRogueVideo.com. We also have a special guest this week and that is John-David Dalton.
JOHN: Yarr. Hello.
CHUCK: Yarr, I think that’s a first.
AJ: Very true, very true.
JOHN: Thank you for having me.
CHUCK: Yeah, no problem. So, do you want to introduce yourself?
MERRICK: I can’t picture a better guy for that job.
JAMISON: I didn’t know about that last one. I knew about jsPerf and Lo-Dash and stuff. But I didn’t know you worked with IE. That’s cool.
JOHN: I do. I work with engine guts all day. My bread and butter is doing a lot of benchmarking. So every day, I keep tabs on all that stuff.
JAMISON: Like, jsPerf is an awesome tool, but people warn against making decisions about how to structure your code based on running a tiny snippet of code in isolation a million times in a row.
JOHN: It’s tricky when you get into benchmarks, because sometimes you do need a microbenchmark. JsPerf doesn’t help you make a good benchmark. So, there are lots of bad benchmarks out there. And the danger of microbenchmarks is that engines are pretty smart and they’ll detect dead code and empty loops and things like that. So, you have to be smart about what you’re testing. But more often than not, I’ve found it to be very useful and it actually does reflect whether a given snippet of code is faster. There’s a lot of stuff in there that’s goofy though. Like they’ll test double equals versus triple equals, or single quotes versus double quotes, or goofy stuff.
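The dead-code pitfall John mentions can be sketched roughly like this (the function names are illustrative, not anything from jsPerf itself). The first timing loop discards its result, so a smart engine may eliminate the body entirely and report a meaningless score; the second keeps a live value the engine cannot throw away.

```javascript
// Naive microbenchmark: the loop body has no observable effect,
// so the engine is free to dead-code-eliminate it.
function deadBenchmark(iterations) {
  const start = Date.now();
  for (let i = 0; i < iterations; i++) {
    Math.sqrt(i); // result discarded: eligible for elimination
  }
  return Date.now() - start;
}

// Safer microbenchmark: accumulate into a value that is returned,
// so the work cannot be optimized away.
function liveBenchmark(iterations) {
  let sink = 0;
  const start = Date.now();
  for (let i = 0; i < iterations; i++) {
    sink += Math.sqrt(i);
  }
  return { ms: Date.now() - start, sink };
}
```

Tools like Benchmark.js (which powers jsPerf) go further, but the same principle applies: make the measured work observable.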
AJ: But we all know that triple equals is faster, right?
JOHN: [Chuckles] Well, it depends, it depends.
JOHN: If you have the same type values, then double equals should technically be the same, because it performs the same number of steps. Now, engines do all kinds of goofy stuff behind the scenes, but the spec steps are the same.
AJ: I think there’s one question that the audience is going to be really keen to want to know and that is, is it faster to ++1 or to 1++?
JOHN: Oh my, gosh.
JAMISON: [Inaudible] jsPerf, man.
JAMISON: One of the hundred million of them out there.
JOHN: So, one of the things I did with Lo-Dash was to avoid a lot of the micro-optimization myth in the library. So I did things not because they were faster on a jsPerf but because they are my kind of style. For example, I don’t use triple equals everywhere. I use it when it’s necessary. So I use double equals when it’s necessary, or I don’t sit there and use a reverse while loop because it got a better jsPerf score. I use the one that I think is more readable. So I try to push back against that. There are some people that use void(0) because they think it’s faster than undefined and I don’t do that.
A lot of the micro-optimization stuff I just ignore and go for the bigger perf gains, which usually revolve around reducing abstraction. That’s where I get the biggest gains. It helps engines inline better and overall you’ll get better performance. With functional libraries though, a lot of devs like to compose. So the secret is in the library that’s the base, your core low-level library: reduce the abstraction there and you’ll get better performance. So that’s what I do in Lo-Dash.
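A minimal sketch of the “reduce abstraction” idea (illustrative code, not Lo-Dash source): the composed version runs every element through stacked helper calls and builds an intermediate array, while the fused version does the same work in one plain loop with a shallower call stack.

```javascript
// Composed: each element passes through filter's callback, then map's,
// with an intermediate array allocated in between.
function composedSquaresOfEvens(array) {
  return array
    .filter(function (n) { return n % 2 === 0; })
    .map(function (n) { return n * n; });
}

// Fused: one loop, no intermediate array, fewer function calls.
function fusedSquaresOfEvens(array) {
  const result = [];
  for (let i = 0; i < array.length; i++) {
    const n = array[i];
    if (n % 2 === 0) {
      result.push(n * n);
    }
  }
  return result;
}
```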
JOE: So when you avoid the micro-performance gains, why are you doing that? Is it religious belief?
JAMISON: So, to sum up all the stuff about jsPerf: it’ll tell you what’s faster, but it doesn’t matter if the while loop is faster than the other loop unless it’s in a hot loop in your code, right?
JOHN: Right. It gives you all the information you need to know if it’s a good or bad test, or if it’s even relevant for your use case. If something is 80 million versus 20 million, chances are both are going to be very fast. That’s 80 million operations per second versus 20 million operations per second.
JOHN: Chances are, in your everyday code, it’s not going to make a difference. You can weigh that against your actual use case and then decide if it’s relevant for you or not. Also, there’s margin of error that it displays, too. It lets you know if there’s some kind of weird engine issue. For example, if your GC is kicking in or if you had something running in the background that was interfering with your test, you’ll get larger margins of error that pop up and reduce your score or inflate your score. But at least you have a red flag there that says, “Hey, something odd happened.” But yeah, I use the operations per second as a real world sanity check.
JAMISON: We’ve danced around Lo-Dash a lot. Do you want to talk about what specifically it is, in case there’s someone in some cave that hasn’t heard of it?
JOHN: Sure. Lo-Dash is another way of writing underscore, and Underscore is a low-level utility library; Lo-Dash is a fork of Underscore that has become a superset of that utility library. For those of you that may not know what Underscore is, it was developed about four years ago and took off in a space where things like Prototype.js and MooTools had a foothold. So instead of extending native prototypes with methods, it bolts them onto the underscore character. So it’s _.each, _.map, _.filter. And as time has gone on, I’ve seen issues crop up where devs’ needs aren’t being met. For example, inconsistencies in older browsers, inconsistencies in the API, and backwards compatibility issues popped up, and I had tried to do the open source thing, which is do pull requests and submit issues. Eventually, it did not look like that was going to be enough.
So then I started working on Lo-Dash, with the big goal of ensuring consistent behavior from your oldest supported browser to your newest supported browser. One of the things I set out to solve was object iteration and array iteration: in older IE, it’s now consistent with even the newest browsers. So no matter what you’re using, you’ll have the same behavior. And it’s really handy for devs that have to support those older environments, because the debug tools aren’t as great for those older browsers. Having to dig around your code to spot inconsistencies which don’t reproduce on newer browsers is a pain and causes devs to cry. I didn’t want that. I wanted something that is consistent. So that’s why I created Lo-Dash.
And I can get into the nitty gritty on what the inconsistencies are, but there are blog posts on that and videos on that, too. But it’s basically consistent object iteration, array iteration, for all environments. That’s why I created it. Then it’s just exploded beyond that now, with custom builds and additional methods and modules now.
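The style John describes, bolting utilities onto a `_` object instead of extending native prototypes, can be sketched like this. This is a toy illustration, not library code:

```javascript
// Toy underscore-style namespace: utilities hang off `_` rather than
// being added to Array.prototype or Object.prototype.
const _ = {
  each: function (array, iteratee) {
    for (let i = 0; i < array.length; i++) {
      iteratee(array[i], i, array);
    }
    return array;
  },
  map: function (array, iteratee) {
    const result = [];
    for (let i = 0; i < array.length; i++) {
      result.push(iteratee(array[i], i, array));
    }
    return result;
  },
  filter: function (array, predicate) {
    const result = [];
    for (let i = 0; i < array.length; i++) {
      if (predicate(array[i], i, array)) {
        result.push(array[i]);
      }
    }
    return result;
  }
};
```

Because nothing is added to native prototypes, the library can't collide with other code that iterates over object keys or extends built-ins, which was a real hazard in the Prototype.js/MooTools era.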
JAMISON: So, some of the changes that you made in Lo-Dash have actually made it back into Underscore as well, right?
JOHN: I like the days when libraries would compete against each other and try to outdo each other. I remember when Sizzle first came out, so the selector engine that jQuery uses. And Dojo would say, “Oh hey, I’ve got a better selector engine,” and they would come out and then MooTools would go, “Hey wait a minute, no I’ve got a better selector.” And they would each go back and forth outdoing each other. I hoped that when I released Lo-Dash that Underscore would do the same. They would say, “Oh snap, I’ve got to go and up my game here.” But that really didn’t happen. They just sat there. So I’ve started to try to pull them along, because it helps developers.
If Underscore gets better, it helps developers. If Lo-Dash gets better, it helps developers. So I’ve tried to help developers by pulling Underscore up too, submitting issues and communicating with the devs. I’ve got one of the core devs on my instant messenger. I ping him anytime there’s a bug I find or whatever, trying to get Underscore to get better as well. Right now I think we’ve got 30+ issues that Lo-Dash has fixed in Underscore, and we’re responsible for a couple of the minor version bumps that fixed issues along the way, including 5.1 or 5.2 or something.
JAMISON: That’s really cool. It’s cool that you can be competitive but still help out your “competitor”. It’s not like, “Screw those guys. I’ll grind their faces into dust.”
JOHN: [Chuckles] The reason I forked anyway was to help devs. So being abrasive or toxic doesn’t help devs. No one wins in that scenario. This way at least devs get a better utility lib if it’s Lo-Dash or Underscore. And it’s difficult at times, though, because it is hard to draw the line with [inaudible] right now. I’ve got a competing lib. How do I balance that with helping Underscore out? But I think I’ve done an alright job at that so far. I’ll keep doing it too, until [Chuckles] they don’t want any more updates. But I’ve been diplomatic about it.
CHUCK: I think it’s really funny though. I don’t know of any other projects where a competing project actually pushes its competitor forward in this way. There are projects that I know of that solve the same problem. They just solve it in a different way. So when they forked, they effectively said, “Well, we like this much about it, but we really feel like this other thing is more important.” So when they fork, they don’t maintain compatibility; they just say, “Use them or use us.” With you, your API is consistent or mostly consistent (I haven’t really looked) with Underscore, and like you’ve said, you basically have this fork of Underscore that you use to push it forward. I just really haven’t seen that approach in any other projects.
JOHN: Well, it’s in a really cool position because our API is so similar. And it really is a true fork of Underscore. If you go back through the commit history on GitHub, it eventually becomes Underscore. But we’re at the point now where it’s not a drop-in replacement. That’s why I maintain the Underscore compatibility build. So that’s always there. But yeah, I’d like to see where this goes. I hope that they pick it up and push back a little bit with some features and stuff, because that’s fun to me. I like the competition. I like trying to outdo someone. Right now, it’s one-sided. I’ve stacked the features against them. So now they’ve got to push back and hopefully make a better lib.
JAMISON: I wanted to talk a little bit about the performance stuff. You said the reason you made Lo-Dash was because of some of the compatibility issues with older browsers.
JAMISON: We use Lo-Dash at work instead of Underscore and we switched to it because it’s faster. That was the buzz that we heard about and we tested it out and it made a difference. So can you talk about the performance? It seems like that’s as big of a reason for people to use it as the compatibility [thing].
JOHN: Sure. One of the things with Lo-Dash is that you can have your cake and eat it too. If you like small files, we’ve got modules. If you like performance, we’ve got the speed for you. If you like consistency or older browser support, we’ve got that too. But with the performance, that came out of trying to create a pull request for Underscore that met Jeremy’s needs. Basically, the restriction was he would put in cross-browser consistency for older browser support if it did not hurt performance.
So, that was my challenge, was how do I add more bug fixes and increase performance? So I did that by reducing abstractions. Basically, you’ll have a method in Underscore that calls each and then maybe has and then a few other methods, all in a loop. So your call stack just keeps building up and up and up there and I’ve tried to reduce the abstraction there to where a lot of things are just a simple loop or I avoid things that engines have problems optimizing away, like .call() and .apply() in a loop. I hoist those out.
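The `.call()`-hoisting trick John mentions can be sketched like this. This is illustrative code, not the actual Lo-Dash implementation: the naive version invokes `iteratee.call(thisArg, …)` on every iteration, while the hoisted version binds once up front so the loop body is a plain function call.

```javascript
// Naive: .call() runs inside the hot loop on every iteration,
// which some engines have trouble optimizing.
function eachNaive(array, iteratee, thisArg) {
  for (let i = 0; i < array.length; i++) {
    iteratee.call(thisArg, array[i], i, array);
  }
  return array;
}

// Hoisted: resolve the `this` binding once, outside the loop,
// so the loop body is a direct call.
function eachHoisted(array, iteratee, thisArg) {
  const fn = thisArg === undefined ? iteratee : iteratee.bind(thisArg);
  for (let i = 0; i < array.length; i++) {
    fn(array[i], i, array);
  }
  return array;
}
```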
JAMISON: That is so cool. That sounds really cool. Does that catch, is that for sanity checking? Like, “Oh, I’ve changed something and now the minified version is a lot slower.” Or is it just for doing incremental improvements?
JOHN: That was actually a happy surprise. A lot of devs don’t know, or may not find it obvious, that minification dramatically changes your code. The idea with minified code is that it’s not supposed to change your performance profile. But every once in a while, I’ve seen bugs creep into UglifyJS or Closure Compiler that trigger de-optimizations in certain engines.
So actually, in my build step, I undo some of the bad patterns that are injected into the minified code. So the minified code that I produce doesn’t have those slowdowns in it. And I wouldn’t have found that if I didn’t have the minified code and wasn’t able to benchmark it and compare it in my suite. Actually, that was an accident. I tried it out and I caught the performance regression and I was really surprised by that. So now it’s one of the things I do before every release, just to keep a check on that and make sure I’m not regressing there.
But yeah, I do that with performance and unit testing. I’ve got it to where I can test, with one unit test file, I can test legacy modules, AMD modules, CommonJS, Node modules, npm modules, the legacy build, the modern build, the mobile build, the strict build, all these builds, in multiple environments. So in Node, in Ringo, in Rhino, in Narwhal, in Rhino with the require flag. And I dig that. I like being able to cover all my bases with a performance suite and a unit test suite that just works everywhere.
CHUCK: So, what do you use to actually do those tests? Do you have your own performance testing library or do you tack onto the top of QUnit or Jasmine or whatever?
JOHN: I use QUnit, but I have a library called QUnit CLIB, which is the CLI boilerplate. That allows me to run on all these other environments. Like for Rhino, it adds setTimeout and setInterval and clearTimeout and clearInterval, because that doesn’t exist in Rhino. So that allows me to run across all these environments. I used QUnit because that’s what I had at the time. It’s not because it’s the best for a given framework. I’ve sunk into it and it works in all my environments, so I’m using it. I use QUnit with a Jasmine style. I liked the structure that Jasmine gave me when I worked for uxebu. So I’ve taken that and applied that to QUnit. So I write QUnit in a Jasmine style. That’s how I handle that.
JAMISON: So, I have some questions about performance optimization in general.
JAMISON: You’ve built a career on that. How do you go about making something faster in the abstract? It sounds like, you just reeled off a ton of topics when you talked about how you made it faster, but say you don’t know all that stuff and you’re just looking at some code trying to speed it up. What do you do?
JOHN: That’s where jsPerf would come in, because then you say, “Okay, I know this function is hot. I know something’s going on here. Let me compare different techniques of implementing that function or that snippet of code.” And then you can use jsPerf to say, “Okay, I’ve reduced the function calls here and now it’s better,” or, “I’ve reduced .call() and .apply() and now it’s better,” or, “I’ve hoisted things out of a loop and now it’s better.” So that’s where I would get the indication that there is an issue: start off with profiling and then narrow it down with jsPerf or any other benchmark utility you can use. I know devs use console.time even. You could even do that if you need.
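The `console.time` approach John mentions is the quickest version of this: bracket a suspected hot function with a named timer and compare runs. A rough sketch (the function and label are made up for illustration):

```javascript
// A stand-in "hot" function to time.
function sumOfSquares(array) {
  let total = 0;
  for (let i = 0; i < array.length; i++) {
    total += array[i] * array[i];
  }
  return total;
}

const data = Array.from({ length: 100000 }, function (v, i) { return i; });

console.time('sumOfSquares');    // start a named timer
const total = sumOfSquares(data);
console.timeEnd('sumOfSquares'); // print elapsed time for that label
```

This is only a coarse sanity check; for real comparisons a tool like Benchmark.js accounts for warm-up and statistical noise.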
JAMISON: I hear the disgust in your voice for that.
JOHN: I’ve tried not to be one of these devs that are like, “My way or the highway,” because it doesn’t help devs out there. That’s why I’m big into, with Lo-Dash, we’re not prescriptive about how you use it or how you ingest the methods. One of my biggest boosts to Lo-Dash’s popularity was to enable AMD support. Turns out, AMD has a [prolific] following and–
CHUCK: That’s why Merrick got the tissues.
JOHN: And the same thing with benchmarking too. If you don’t like jsPerf and you like your own lib, as long as you’re profiling and you’re actually looking at perf and keeping a tab on it, that’s a win. I would love it if more libraries shipped with performance suites, just to keep tabs on things and know if there’s going to be a regression. Even if it’s not keeping tabs on the competition, just keeping tabs from release to release, which is what I use it for.
JAMISON: Sure. I feel like I see lots of specific performance information about individual features in some libraries, but it’s never like, “Overall this is the speed up of the whole thing.” It’s, “We sped up this specific operation.”
JOHN: Right, yeah.
JAMISON: But I really like one of the things you said about looking. So in Lo-Dash, you said you know which functions are used more so you spend more time on that. I think that can even apply to applications too. Who cares if this chunk of the site that no one ever uses gets a hundred times faster? If no one uses it, that doesn’t make anyone’s life better. I haven’t thought about that before.
JOHN: That’s why one of the first methods I optimized was the each method, because that’s used everywhere and I wanted to make sure that I didn’t regress there. But then I started optimizing all the other methods surrounding it, because there are very targeted methods that are very popular for certain kinds of data manipulation. So difference, uniq, and methods like that I’ve gone ahead and optimized too. But I’ve optimized them for their edge use cases, which are incredibly large data sets, really large arrays. I’ve made sure that these methods perform well with large data, which is interesting. Now I’ve optimized the common areas, and I can go off and optimize the edge behavior as well.
CHUCK: So, I have to ask. How do you know which ones are the ones that people use the most? Are you using a highly technical technique like talking to people or do you have some other way of knowing that?
JOHN: [Chuckles] No, I get it from a couple of different pieces. But you’ll see which methods are being benchmarked on jsPerf. So I can keep tabs on that and see, “Okay, I see a lot of people doing array iteration or object iteration or DOM selection or something, different aspects of that.” Then you get the feeling if you do a code search, you’ll see how many times these methods keep popping up over and over again. Then you do, you talk to devs and you can figure out that, “Oh hey, I’ll do a sanity check every once in a while and say hey, what are your favorite APIs?”
I did this on Twitter a couple of times too. I said, “Hey, just shoot me your most commonly used Lo-Dash methods.” And then I’d get a bunch of replies and start tallying up which methods people are interested in, and then make sure that I keep those fast. It also helps that I have the competition of Underscore. So I can make sure that I don’t regress on methods where they are fast, and that I’m always on top of some of their methods. Sometimes I can’t be. But generally I try to make sure I beat them on performance.
MERRICK: I’ve got a question for you. There are some cases where you’ve actually added features to Lo-Dash that don’t exist in Underscore. Curry comes to mind. Are you worried at all that Underscore might implement curry but use a different signature, in which case you’re in trouble?
JOHN: [Chuckles] So, now that we have the Underscore compatibility build, that really doesn’t bother me as much, because we can always keep compatibility there. So right now, Lo-Dash is not a 100% drop-in replacement unless you use the compatibility build, because we do things like allowing you to exit early out of each. If you return false, it exits early, just like jQuery. Our clone method, the shallow clone, clones date objects and regexes, which is something that Underscore doesn’t do. So there are lots of little differences. So if they were to do that, I would just make sure that the Underscore compat build was compatible with it.
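The early-exit behavior John describes can be sketched in a few lines. This is an illustration of the idea, not the Lo-Dash source:

```javascript
// Like jQuery's each: an explicit `return false` from the callback
// stops iteration early.
function forEachWithExit(array, iteratee) {
  for (let i = 0; i < array.length; i++) {
    if (iteratee(array[i], i, array) === false) {
      break;
    }
  }
  return array;
}

// Visits 1, 2, 3 and then stops.
const visited = [];
forEachWithExit([1, 2, 3, 4, 5], function (n) {
  visited.push(n);
  if (n === 3) {
    return false;
  }
});
```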
MERRICK: Got it. Very cool.
JOHN: Yeah. Now, I try to encourage a conflict-free API. So at one point, the at method was being discussed to be added to Underscore. And you know, we have an at method in Lo-Dash. So I made sure that that could be imported. So it’s not like I would bring it on, say, “Hey yeah, bring in all the API conflicts,” because that makes it harder for devs that are trying to build libraries that work with both Lo-Dash and Underscore. So I’ve made some patches to Lo-Dash to support common patterns that Underscore devs use because I’ve seen some libraries say, “Well hey, we want to be able to just say use Lo-Dash or Underscore and things will work.” So the more API conflict you have, the harder that is to do.
MERRICK: Got it.
CHUCK: I know this is a hard question to answer, but do you have any idea of how many people are actually out there using Lo-Dash?
JOHN: I keep tabs on the npm stats, because I found this really nice site that allows me to do that. It’s going to be one of my picks for today. It’s called npm-stat. And I can see there how many people are using it. So currently, Lo-Dash is downloaded on npm about once every two seconds. And by December, we’ll have over a million downloads a month. So I think it’s starting to pick up. And I can look at the trends from month to month and see that it’s basically just a vertical line on adoptions. We’re gaining. Every month, we’re gaining more and more users. So I think it’s just getting devs aware that there is an alternative that is faster, more consistent, has more features. So getting the word out has been the only obstacle there.
Usually, if there’s been an issue where a dev can say, “I like this because Underscore does this better,” I’ve made sure that we cover that in Lo-Dash and make sure that we do it better or we provide an option for them in Lo-Dash. So I think from that side, we are gaining popularity. We just beat Underscore actually, on the daily downloads, on Monday and then we beat them again yesterday, too. So we’re starting to get to that point where we’re starting to pass Underscore in its own turf of npm.
AJ: That’s cool.
JOHN: Yeah. I’m really, really excited about that too. It surprised me how Lo-Dash went from being in the background to just skyrocketing in use. I don’t know if that’s because the stats got better or what.
MERRICK: I think the adoption of Grunt and some of those other things really helped.
JOHN: Which is interesting, because Grunt is still using a pre-1.0 version of Lo-Dash. So that’s going to be a big win whenever they can get to the point to upgrade that, too.
MERRICK: Yeah. [Chuckles]
JOHN: But yeah, I can’t believe the adoption. I’m very thankful for the devs that have taken a chance on it, because a lot of devs, if they don’t dig into what Lo-Dash is, think it’s like a [me too] library, where it’s just a fork and all it’s got is performance, and they don’t dig into the consistency or the features or the modules or the custom builds or the documentation. So that’s why I’ve started to try to shift the focus away from performance. Performance is great for getting the word out initially, but it’s so much more than that now, so I have started to drop it from the keywords and stress it less. Because it really is about the consistency and features, and the performance is just the nice-to-have on top.
MERRICK: Yeah. The deep clone, for example, is just something that’s so useful that I don’t think you can even get from Underscore, honestly.
JOHN: And a lot of these features I get because they were asked in Underscore’s issues and then closed over and over and over again. So devs would continually open these issues saying, “Hey, what about this feature?” and then it would get closed. Then another dev would say, “Hey, I really like this, I want this feature.” So that’s been my road map for Lo-Dash, is to see what are the features that devs are asking for but not getting in Underscore, and then implement them in Lo-Dash.
AJ: So do you have a hook into the GitHub API where every time somebody opens an issue on Underscore, you just get shot in the back of the head with a Nerf Dart?
JOHN: No. I manually check up on it. Every day, I check on it at least once a day just to see where it’s at.
AJ: It’s definitely not as cool as having a GitHub hook.
JOHN: No, it’s not [as cool as having a GitHub hook].
JOHN: But I do, I check in on it just to see what issues are being opened. I also check Stack Overflow and a few other places just to see what the questions out there are. And what I’ve started to see now is that devs are starting to refer to Underscore as Underscore/Lo-Dash. It’s becoming one name, Underscore/Lo-Dash.
JAMISON: [Chuckles] GNU/Linux.
JOHN: Yeah. I’ve also started seeing them say Underscore but use Lo-Dash syntax for things, which is a little confusing. [Chuckles]
JOE: Wait, what do you mean syntax difference?
JOHN: So, we have intuitive chaining in Lo-Dash, which means you don’t have to call the chain function to get chaining. If you use the Lo-Dash function and you pass it a collection or a value, then it just assumes you want chain. So if you use Lo-Dash like jQuery, it’s going to assume you want chain, because why else would you be using it like that? So I’ve seen devs use that syntax and refer to Lo-Dash/Underscore, even though that syntax will not work in Underscore. It’s one of the features we have.
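The “intuitive chaining” John describes can be sketched with a toy wrapper: passing a value to `_()` returns a wrapper whose methods keep chaining until you unwrap with `value()`, no explicit `chain()` call needed. This mimics the style, not the library’s implementation:

```javascript
// Wrapping a value implies chaining, jQuery-style.
function _(value) {
  return new Wrapper(value);
}

function Wrapper(value) {
  this.__wrapped__ = value;
}

Wrapper.prototype.map = function (iteratee) {
  return new Wrapper(this.__wrapped__.map(iteratee));
};

Wrapper.prototype.filter = function (predicate) {
  return new Wrapper(this.__wrapped__.filter(predicate));
};

Wrapper.prototype.value = function () {
  return this.__wrapped__;
};

// No chain() call needed before the method calls.
const result = _([1, 2, 3, 4])
  .filter(function (n) { return n % 2 === 0; })
  .map(function (n) { return n * 10; })
  .value();
```

In Underscore of that era, the same expression required an explicit `_.chain([1, 2, 3, 4])` first, which is the syntax difference being discussed.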
JAMISON: It’s really interesting hearing you talk about this competition, because there’s so much competition, for example in browser MVC libraries. But it seems like they don’t, maybe because they’re coming at it from such different places that they all have really different strategies. So they don’t maybe motivate each other and learn from each other as much as Underscore and Lo-Dash, just because you guys directly forked. Is there any general lesson you think that library implementers can learn on how to deal with competition, even if they’re not between API compatible things? Does that make sense?
JOHN: Yeah. I think competition is great. And I really get into it. And I like trying to one-up. I treat it like a challenge where it’s like, “Oh yeah, you can’t get it as fast with the cross-browser support? I’ll show you. I’ll get it faster.” And then you work at it and work at it and you get it. Or they’ll say, “Well, you have this but now your build is three times as big,” and I’ll say, “Oh yeah, well I’ve got custom builds now, so waah.”
JOHN: I think if you have fun with it and you enjoy it and you don’t see them as this, I don’t know how devs would get it in their head that
JOHN: Yeah, arch-nemesis, right.
CHUCK: I love it.
JOHN: It’s all things used to help devs. So I’m building a library to help devs and getting into super big arguments about that kind of stuff doesn’t really help anyone. It just feeds the trolls. So I try to stay positive about it. Every once in a while, I’ll see people come across my Twitter feed and say, “Wow, you’re really trolling Underscore,” or something. And I think it’s because I’m the cheerleader for my lib that it may sometimes seem like that, but I’ve never called them names or said that they were stupid or something like that.
I’ve always come at it from a dev perspective. Is this helping devs? Is this hurting devs? Is this going to allow it to be used in more environments or something like that? So I try to keep it positive. Also, I think that knowing if you make a mistake that to just own up to the mistake and fix it fast, that’s what I do. If someone finds a bug or if someone calls me out on something, I don’t sit there and showboat about it or try to stand on my
JOHN: With this thing, I’m always thinking about the semicolon issue in Bootstrap or something.
JAMISON: Aren’t we all?
JOHN: Yeah, well [Chuckles].
AJ: We all know it’s faster to use semicolons.
JOHN: With that, how I would have ended it was: adding the semicolons helps devs, because some minifiers broke. So for me, that falls into my question of whether it’s better to help devs or to say, “No, change your code.” And I’d say it’s better to help devs.
JOHN: So, I think if you fix your mistakes fast and are willing to adapt, then you’ll do well. I’ve learned from Underscore, too. There are a couple of things that they’ve done where I’ve gone, “Wow, that’s crazy cool.” Like with their zip method. At one point, they had unzip which was using zip internally to unzip and that blew my mind. So now, I’ve done this too. Yeah, I think you can always learn from them.
I’ve also looked at other competition too. There’s a library called Mout and there are other utility libs out there. So I look at them to see what cool things I can implement. For example, on my roadmap is lazy evaluation, because someone forked Underscore and Lo-Dash and created their own lib called Lazy.js where they use deferred evaluation of chained methods to get performance gains. Instead of doing map().map().filter() where it’s 200 iterations, 200 iterations, and then 200 iterations to filter down to a collection of two, they defer all of that and compose it in a way to where you reduce the iteration count and get significantly faster performance.
So, on my roadmap is to adopt that kind of technique for the chaining style. Because, one, they boast it as being faster than Lo-Dash [Chuckles]. So my competitive side is like, “Okay, I’ve got to address that need there.” With Mout, they were the module lib, so I made sure I covered that case with Lo-Dash as well. Now you can get Lo-Dash spliced up into AMD modules or npm packages or Node modules, all in one repo. I think a lot of the, what is it, the NIH, not invented here, you’ve got to just let that go. There are other libs out there that do things really, really well. And now with code becoming modular, you can just tap into little bits of these libs that do one thing and do it well.
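The deferred-evaluation idea behind Lazy.js that John describes can be sketched roughly like this: instead of materializing an intermediate array after every `map` and `filter`, the callbacks are queued and fused into a single pass per element. This is a simplified illustration only; real lazy engines also support early termination and more operation types:

```javascript
// Queue map/filter callbacks; run them all in one fused pass at value().
function lazy(array) {
  const ops = []; // queued { type, fn } operations
  return {
    map: function (fn) { ops.push({ type: 'map', fn: fn }); return this; },
    filter: function (fn) { ops.push({ type: 'filter', fn: fn }); return this; },
    value: function () {
      const result = [];
      outer: for (let i = 0; i < array.length; i++) {
        let item = array[i];
        for (const op of ops) {
          if (op.type === 'map') {
            item = op.fn(item);
          } else if (!op.fn(item)) {
            continue outer; // filtered out: skip remaining ops for this item
          }
        }
        result.push(item);
      }
      return result;
    }
  };
}
```

Instead of three full passes for `map().map().filter()`, each element flows through the whole pipeline once, which is where the iteration-count savings come from.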
AJ: So, I like how you said that, this was a little bit earlier, but you said how you optimized for edge cases where the edge case is a really large array. And I think that’s really cool because a lot of people, I don’t know, if you tell me you’re optimizing for an edge case, I’m thinking, “Why would you do that?” But it makes sense because it turns out that bogosort is just as fast as any other sort until you hit about 7 items and then it starts to get really slow. So if you have little–
JOHN: What I noticed is that I started seeing conference talks about large data and then issues being reported about performance issues with large data. And I’ve always looked at jsPerf again. It’s my finger in the air where I can say what is the current [trend] on performance issues? So there were a lot of performance tests being created for large data. So I figured I can optimize this. I can get this to where we do it really, really well.
So, if you do happen to work with large data, or if you’re using D3 and Lo-Dash, you can iterate over these collections faster or do a unique or another operation speedily with large data. Basically, I profile to detect when the large array optimization would actually benefit users, and that’s when it kicks in. Across multiple browsers, at a given number, that’s when I kick in the large array optimization.
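A rough sketch of that idea, not Lo-Dash's actual implementation (the cutoff value and names here are illustrative): below a threshold a simple indexOf scan is cheap, and above it a Set-based path avoids the quadratic scanning cost:

```javascript
// Illustrative cutoff; a real library would pick this by profiling
// across browsers, as John describes.
const LARGE_ARRAY_SIZE = 200;

function uniq(array) {
  if (array.length < LARGE_ARRAY_SIZE) {
    // Small-array path: linear scans are fast enough here.
    const result = [];
    for (const value of array) {
      if (result.indexOf(value) < 0) result.push(value);
    }
    return result;
  }
  // Large-array path: constant-time membership checks via a Set.
  return Array.from(new Set(array));
}
```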
JOHN: Then you would profile your code and see which methods are hot. Someone said we already asked that. I think you did. But yeah, I would just say profile your code. Figure out which methods are hot and then go from there.
CHUCK: But I guess I’m looking for more specifics.
AJ: I would say [inaudible].
CHUCK: I guess I’m looking for more specifics. So do you plug in jsPerf at that point and start looking at what it’s telling you?
JOHN: No, you use the profile data to figure out what methods are hot and then you take those and you can break them down from there. jsPerf would be used if you wanted to compare different implementations of, say, a slow method. For example, I’ve seen them do this with, I’ll just say Backbone’s event triggering implementation. They compared the versions of its performance from one release to the next to a future patch to improve performance. So they know it’s slower so then they’ll compare different implementations of it to see which one is fastest. A lot of times when you present, “Hey, I’m going to send a pull request to this lib to make something faster,” they’ll say, “Hey, give me a jsPerf of that to validate what you’re doing is actually going to make a significant difference.” It’s like a sanity check in that way.
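For a sense of what such a comparison looks like, here is a crude homegrown timing harness (jsPerf itself is built on Benchmark.js, which adds statistical rigor this sketch lacks; all names here are illustrative):

```javascript
// Race snippet A against snippet B for a fixed time window and report
// a rough operations-per-second figure for each.
function opsPerSecond(fn, ms = 200) {
  const end = Date.now() + ms;
  let ops = 0;
  while (Date.now() < end) { fn(); ops++; }
  return Math.round(ops / (ms / 1000));
}

const data = Array.from({ length: 1000 }, (_, i) => i);

// Snippet A: Array#indexOf membership test
const withIndexOf = () => data.indexOf(999) >= 0;
// Snippet B: Set membership test
const set = new Set(data);
const withSet = () => set.has(999);

console.log('indexOf:', opsPerSecond(withIndexOf), 'ops/sec');
console.log('Set.has:', opsPerSecond(withSet), 'ops/sec');
```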
JAMISON: So, jsPerf can tell you how long something takes. It’s good for small chunks of code when you’re trying to determine the difference between them.
JAMISON: It’s not like, “Find me what is slow about this code,” it just tells you how long it takes.
JOHN: No. It just basically tells you which snippet is faster, snippet A or snippet B or snippet C. And that’s all it does. So to narrow it down, you want to use your browser dev tools to profile, to figure out where your hot spots are.
JAMISON: So, you have mentioned some specific tricks when we talked about performance before, like pulling things out of loops or not using apply and call or not using some natives. How do you develop that knowledge? Do you just need to learn about the different engine implementations and what things are slow? Is there a good resource for learning all that stuff?
JOHN: But for devs looking for resources, I’m not sure where a good list of all of these things is. That’s what I do with my JSConf talks. It’s been, “Hey, here are some things you can do to speed up your code.” That’s what my last JSConf talk was, where I went through several of these techniques that you can use to speed up your code. And I’m actually using all of them in Lo-Dash. So I guess I would redirect to my JSConf talk for this year.

JAMISON: Going to throw a link into Skype in there? I guess we can find it later.
JOHN: Yeah, I’ll find it later. My slides are really bare, so I also released a screencast with it that walks you through the optimizations too. So I’m still waiting for the official JSConf video to be published, but I’ve had the unofficial one out almost since I got back from JSConf. So that’s up too, so I’ll shoot a link there. It basically covers just a lot of things that you can do in your code. It doesn’t [even have to be to] [inaudible] code. These are techniques that you can do with any of your production code or your code that will give you better performance.
MERRICK: I wanted to go a different direction and that was to ask about these environment-specific builds. Do you have plans for, or does it exist already, maybe a Lo-Dash that will wrap streams? Something more Node specific?
JOHN: No, actually, that hasn’t been brought up. So far we’ve just done builds for older environments and newer environments or mobile. But I haven’t, besides having the npm build and the Lo-Dash Node build which targets Node specifically, I haven’t done any other kinds of builds around that.
MERRICK: Got it.
JOHN: I’d be interested in an issue though, or a feature request. I’m very flexible with adding features, especially because I do have a custom build. I’m not as torn about adding a specific API, because if you don’t like it, you can always create a custom build or use the module build or one of the, what is it, hundreds of variations on the build to create something that you’ll dig.
MERRICK: Yeah, for sure.
CHUCK: One question that I have about performance, you talked a little bit about some of these mythical optimizations that you can make. How many of the things that we hear on a regular basis are provable to improve performance versus the ones that aren’t?
JOHN: A lot of it falls down to use case, but I’ll tell you the ones that I’ve avoided. I’ve avoided void(0). I’ve avoided triple equals. I’ve avoided reverse while loops or any kind of for loop magic where they say, “No, it’s this way.” ++i versus i++, I’ve avoided that kind of thing. Really what it comes down to is, for me, the biggest improvement is just reducing your function calls and getting that out. I’ve seen some other ones too where they’ll say string concatenation versus using an array and join. But a lot of those benchmarks are missing the fact that engines nowadays will defer the string concatenation until the full string is evaluated.
So, these benchmarks, these jsPerfs are testing the wrong thing. They’re not really showing you what the actual performance is going to be from using string concatenation. And I think it’s cool that the engines do this in the background, too. Basically when you’re doing all these concats, it doesn’t create the final string until you do something like a regex that iterates over the entire string. Until then, it’s separate string snippets. And when you use array.join, that forces the flattening of the string right there because you’re creating the string with join.
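The two patterns under discussion, side by side. On modern engines the naive benchmark difference can be misleading for exactly the deferred-flattening reason John mentions, so a sketch like this is about illustrating the patterns, not declaring a winner:

```javascript
// Pattern A: plain += concatenation. Engines may represent the result
// as a "rope" and defer building the final flat string.
function buildWithConcat(n) {
  let s = '';
  for (let i = 0; i < n; i++) s += 'x';
  return s;
}

// Pattern B: collect parts, then join. The join call forces the full
// string to be materialized right there.
function buildWithJoin(n) {
  const parts = [];
  for (let i = 0; i < n; i++) parts.push('x');
  return parts.join('');
}
```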
JOHN: Also, things like a switch statement versus if-else, if-else. I found that the switch statement really helps for mobile, like Safari mobile specifically. But other than that, it’s really not a big deal. I found some very specific Safari mobile optimizations. So for a while, I avoided Object.keys in Safari mobile because it was slower. But that was just the work of profiling. But it made enough of a difference. It was 30% slower. So it wasn’t just micro-optimization-level slower for me at that point. But then again, you jsPerf it and you see, is this millions or is this hundreds of thousands or thousands or hundreds of operations per second difference. And you can make a judgment there. I would say use the ops per second measurement on jsPerf to give you a sanity check on whether this is really going to matter.
JAMISON: I think another thing that you have talked about a little bit but haven’t just come out and said is that these optimizations often make your code uglier, harder to follow.
JOHN: That’s true. They do make it harder to read. So that’s why there’s this balance I do with Lo-Dash to try to figure out, is the performance gain worth devs going WTF over the code? That’s what I did with method compilation. In the beginning, I compiled all the things. And then I got devs that actually would not use Lo-Dash because of method compilation because it was too cryptic and hard to read. They couldn’t grok the source and then trust the source. So I pulled that back and in doing so I probably lost a little bit on performance but it wasn’t enough to make a big difference. So I increased dev readability and still found a balance with perf. So it’s a balancing act, I’d say. Only do something super cryptic if you know it’s going to really help.
So, Backbone does something in their event-emitting code where they have if you pass arguments length of four, do this, if you passed arguments length of five, do this, or six or seven or eight. And it’s this really unrolled logic. And I’d say only do that kind of crazy stuff if you can show really big perf wins and comment the heck out of it. So even in my method compilation, I added a ton of comments to it just so devs wouldn’t be as intimidated to look at the given chunk of code.
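A sketch of that argument-count unrolling pattern (simplified and with illustrative names; Backbone's actual event-dispatch helper is similar in spirit but not identical). The direct calls for common arities avoid the overhead of fn.apply, which is exactly the kind of win that should be commented heavily:

```javascript
// Call every handler with the given args. For the common cases (0-3
// args) we call directly, which engines optimize far better than
// apply; apply is only the fallback for rare, longer argument lists.
function callEach(handlers, args) {
  const [a1, a2, a3] = args;
  switch (args.length) {
    case 0: for (const h of handlers) h(); return;
    case 1: for (const h of handlers) h(a1); return;
    case 2: for (const h of handlers) h(a1, a2); return;
    case 3: for (const h of handlers) h(a1, a2, a3); return;
    default: for (const h of handlers) h.apply(null, args);
  }
}
```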
JAMISON: Wise words.
CHUCK: Alright. Well, any other thoughts or questions before we get into the picks then?
AJ: I have none.
JAMISON: I’m fresh out.
MERRICK: I have some feature requests.
JOHN: Sure, yeah.
MERRICK: But I can make those on GitHub.
JOHN: Oh, okay. Cool.
JAMISON: We’ll put him on the spot.
CHUCK: There we go. How would you implement, I’m just kidding.
JOHN: [Groans] I don’t do live coding.
JOHN: I just saw on the chat that they said async.
MERRICK: Yeah, I want async. [Chuckles]
JOHN: I think there is an async library for Node. And it’s fantastic. So I would say they are probably well-suited for that need.
MERRICK: Oh, I’m sorry. When I said async, I meant sometimes when you’re doing really large DOM operations, you have to chunk them on the next turn of the event loop. So Lazy.js does this really cool thing where you can put in an async call and then a take. So it’ll call that function every 50 or whatever times until it’s out.
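A minimal sketch of the chunked iteration Merrick describes (the names are illustrative, not Lazy.js's API): process a fixed number of items, then yield to the event loop with setTimeout before continuing, so large DOM or data work doesn't block rendering:

```javascript
// Process `size` items per turn of the event loop, then reschedule
// with setTimeout(…, 0) until the whole array has been visited.
function forEachChunked(items, size, fn, done) {
  let index = 0;
  function step() {
    const end = Math.min(index + size, items.length);
    for (; index < end; index++) fn(items[index]);
    if (index < items.length) setTimeout(step, 0); // yield, then continue
    else if (done) done();
  }
  step();
}
```

In a browser you would call this with a DOM-touching `fn` so each chunk's work stays under a frame budget.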
JOHN: Oh, snap.
MERRICK: Yeah, it’s pretty cool.
JOHN: Nice. Alright. I’ll look at that then. Like I said, the next step for me to compete with Lazy would be the deferred evaluation on the chaining syntax.
CHUCK: Alright. Well, let’s go ahead and do the picks. AJ, you want to start us off?
AJ: Yeah. So first of all, I’m going to pick John-David Dalton and I’m going to dedicate this song to him.
MERRICK: Please don’t.
AJ: [Singing] Did you ever know that you’re my hero?
JOHN: Wow, that’s happening.
JOE: Where’s the edit button?
CHUCK: Oh, no kidding.
AJ: Hey, it wasn’t that bad. I was on [inaudible].
JOHN: No, it wasn’t. It was alright.
JOE: Do we have the same bleep that you get for live TV swearing?
AJ: So, other things. With my DJ company, I put on a ball and it was really cool because if you’ve ever seen The Greatest Movie Ever Sold, this was like the greatest ball ever sold. So I went around to all these other companies that are in the wedding reception industry or in the young single adult industry and got them to sponsor the entirety of it. So I want to give a shout-out to a couple of the companies that helped me out that are nationwide, because most of them are just local here in Provo. Home Depot. They were really cool because they let us use a generator for free, which we needed for a food truck. Sam’s Club actually provided a lot of the utensils and whatnot. And then Carrabba’s gave us a date night package to give away to the winners of one of the contests. Actually, Sizzler did that too.
Anyway, so I’m super happy that there were really cool managers that were into putting on a free event for young single adults in this area and making something that was fun and classy. Because one of my missions with my DJ business is to make more classy stuff. Because things just aren’t classy enough anymore. Anyway, end of [rant].
CHUCK: Alright. Joe, what are your picks?
JOE: Alright. I’ve got three picks here. The first one is the new TV show that just started up, The Michael J. Fox Show. I really enjoyed watching it. It’s pretty funny. It’s not quite as funny as Big Bang Theory, but it’s still quite enjoyable to watch. I’m also going to pick Skinit.com. Skinit is a company that produces vinyl sticky skins for all kinds of devices. I bought one for my first MacBook Pro. I just bought another one for my MacBook to give it a little personality. I know that the rage is for everybody to throw stickers on their MacBook as if they’re a racecar driver.
CHUCK: I’ve never heard it said that way, but yes.
AJ: I really [inaudible], yeah!
MERRICK: We should really explore sponsored open source.
JOE: Right, right.
MERRICK: John-David Dalton could probably get some money putting stickers on his laptop before he speaks.
JOE: Yeah, probably. Probably. But I think something that looks a little bit nicer and represents some part of your personality versus just selling out to the corporate machine.
JOE: No, I have no problem with those stickers but I really like Skinit. I think it really made my MacBook look nice and everybody knew which MacBook was mine at work. So Skinit.com is my second pick.
My last pick is going to be Nodist. Nodist is basically nvm or N for Windows. So it allows you to run multiple versions of Node and switch between versions, which is an absolute necessity because Node and other libraries love to break each other. So being able to switch to a version of Node that actually works is just absolutely necessary. So I’m going to pick Nodist. I think it’s a great project somebody put together so that Windows people get the same love that non-Windows people get.
MERRICK: Which project did you pick? Nvm or Node? What was it?
MERRICK: Do you know if N from TJ Holowaychuk works on Windows?
JOE: It does not.
JOE: Yeah. So Nodist.
MERRICK: Good to know.
JOHN: What about the one from Isaac? Nave, I think?
JOE: I don’t know.
JOHN: I mispronounce. But he’s got one too, so I’ve switched from N to that one.
MERRICK: I’ll probably switch to that one, just because the nature of him feeling responsible for Node.
JOE: Yeah, I don’t know. I just did a Google search with reasonable search criteria and that was what came up.
MERRICK: He says in his README it will probably never work on Windows. [Chuckles]
JOE: Right. Okay, so those are my picks.
CHUCK: Awesome. Merrick, what are your picks?
MERRICK: The second pick I have is actually this. Chrome has introduced, behind a flag, a dialog element, which I don’t know if this is good or bad. But it’s interesting having these kinds of high-level UI elements implemented by browsers. I don’t know if they’re going to go the way of alert where nobody ever uses them or if people find a way to extend and use these elements. The APIs are a little bit wonky right now. For example, they’re using show and close instead of show and hide. So I wanted to try and direct attention at some of these implementations to get people to give these guys more feedback, because it would really suck to have a poorly implemented dialog that no one could use.
And the other pick is 1Password. They released an update. It’s 1Password 4 now and it’s just terrific software for managing all your passwords.
JOHN: I use it.
MERRICK: Yes, I love it.
CHUCK: Yeah, you have to have something like that these days.
JOHN: It’s created some interesting conversation because I’ll say, “Hey, I use 1Password,” and then someone will say it’s not a good idea to use just one password for everything.
JOHN: I’m talking about the software.
CHUCK: No way, really?
CHUCK: That’s funny.
MERRICK: So true.
JOHN: Am I up?
CHUCK: Sure, go ahead.
JOHN: Alright, cool. My pick would be, one of them is CDNJS. The maintainers are fantastic. They respond to issues and feature requests. They’re working to get Zopfli support so you can get even smaller CDN-ed files. And every time I release, I always do a pull request to them to update my libs on the CDN. So that’s one.
The other pick would be modules of any kind. So AMD, ES6 or CommonJS or the Node-style modules. So if you’re digging Browserify, go for it. Now is the time, if you’re going to do it, to use modules. And no matter what kind you like, go for it. There are always cross-compilers to go from one format to another, or back to raw JS.
MERRICK: What do you prefer?
JOHN: Oh boy. So, I took three months to add module support to Lo-Dash and so I got to learn a lot about the issues with AMD but it was really issues with circular dependencies in my own code. So I’m going to stay out of that fight and just say I support all modules.
MERRICK: Tell me in chat what you would do if you start a new project. I’m just so curious.
JOHN: I have UMD. That’s the first thing I add to my lib, the UMD support, so it works across multiple kinds of module loaders. In fact, in the last release or so I’ve even beefed that up a little bit to support an even wider range of shims that people are using to support these various module loaders. So I would say go UMD if you can and then create targeted builds. But if you do choose one, just look at having the cross-compiler there for the other format. Because again, if you’re choosing one, especially if you’re a lib dev, if you’re choosing one to hang your hat on, you’re closing the door to a lot of other developers.
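For reference, a classic UMD wrapper of the kind John describes (the library and global names below are placeholders, not Lo-Dash's actual wrapper): detect AMD first, then CommonJS/Node, then fall back to a browser global:

```javascript
// UMD ("returnExports" style): one file that works under AMD,
// CommonJS/Node, and as a plain browser <script>.
(function (root, factory) {
  if (typeof define === 'function' && define.amd) {
    define([], factory);           // AMD (RequireJS and friends)
  } else if (typeof module === 'object' && module.exports) {
    module.exports = factory();    // CommonJS / Node
  } else {
    root.myLib = factory();        // browser global
  }
}(typeof globalThis !== 'undefined' ? globalThis : this, function () {
  // The library body goes here.
  return { version: '0.0.1' };
}));
```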
MERRICK: Yeah, I absolutely agree. I totally agree.
JOHN: The last pick I had is for that npm stat site. So I’ll post a link to that.
CHUCK: Awesome. Alright, Jamison, what are your picks?
JAMISON: I have three picks today. One is music. It’s an album called Ask the Dust by Lorn. It’s organic bubbly electronic music. It’s like dubstep without the obnoxious parts of dubstep.
MERRICK: You better not be talking about the drops, because those are the best part.
JAMISON: Oh my, gosh.
JAMISON: You and Skrillex sitting in a tree. Okay so that’s one of my picks. You should check it out. I’ll leave it up to you to decide whether it’s obnoxious or not, but I like it and I don’t like the really screechy grinding gears dubstep.
The next pick is a JSConf talk called Optimizing for Developer Delight by Rebecca Murphey, I think. And it’s just about some things an engineer on a team can do to make life better for everyone on the team. Some of it’s tooling and documentation and how to communicate knowledge and how to eliminate snags that pop up. That was a really good talk.
And the last one is this column in some USENIX magazine that I’ve never heard of called The Slow Winter. But it’s basically this satire about, oh man it sounds so dumb when I say this, I feel stupid. But it’s amazing. It’s this satire about performance optimization. It’s just this guy writing. It’s like Lewis Carroll was a hardware engineer and wrote the Alice in Wonderland of branch prediction. It’s amazing.
JAMISON: It’s two pages long and it made me laugh out loud. It’s pretty good. So those are my picks.
CHUCK: Awesome. Alright, well I’ve got a couple of picks. Actually, probably just one. No, I’ve got two. So the first one is, I finished Portal 2 and I was looking for another game to start playing and a lot of people were telling me to go check out League of Legends. And I’m enjoying it. I’m still very new at it, so I pretty much suck at it. But I’m enjoying it.
JAMISON: Oh, you’re playing the internet’s best cyber bullying simulator.
MERRICK: It is such a bullying game, but I started playing it and my self-esteem is at an all-time low.
MERRICK: I almost didn’t make it to the show because I was being bullied so hard last night.
CHUCK: Oh, really?
MERRICK: Oh, dude. They have this whole tribunal review system because the community is just so bad.
CHUCK: Oh, wow.
JAMISON: We’ll have to talk about this after. I’ve got stuff to say about that game.
CHUCK: Yeah, I haven’t had any issues yet, of course. I tell people I’m new and then no one will talk to me. Maybe that’s it. But anyway, just getting in and running around and killing stuff is awesome. That’s one pick.
The other pick is Reflector App. It’s an app that allows you to do AirPlay from your iPhone or iPad to your Mac. You can do AirPlay to the Apple TV and stuff and now they’ve got the AirDrop so you can do that to other iPhones and iPads. But it’s really awesome to be able to do AirPlay to my Mac. So then if I’m recording a screencast, then I can run through it and I can say, “Hey, here’s how I use this on my iPhone.” In fact, I did that yesterday. I recorded a five-minute video on how I use Evernote for my mastermind group and part of that was, “Hey, here’s how I use it and here’s why I really like it. It’s because it syncs to my phone and I can do this kind of thing.” Anyway, those are my picks.
Thanks for coming, John. I really appreciate you coming.
MERRICK: Woo! Yeah, thank you.
JOHN: Thanks for having me.
JAMISON: Yeah, this was great.
JOHN: I’ve been wanting to jump on a podcast for a while, ever since I heard you all talk with the MooTools guys.
MERRICK: Oh, awesome.
CHUCK: Yeah, now you’re web famous because you’re on our show, right?
JAMISON: Yeah, hopefully this can lead to great things in your career.
AJ: Yeah, maybe a few people will hear about that project you got. What was it called again?
JOE: Yeah, we will want some props and [inaudible].
JAMISON: Tiny-Dash or something?
JOHN: Yeah, yeah. Tiny-Dash, yeah.
JOE: We’ll want a note in your eulogy.
JOHN: Oh, right.
CHUCK: Alright. Well, thanks for coming. We’ll wrap this up. We’ll catch you all next week.