
October 17 2011

Why Chef?

Being a system administrator is alternately thrilling and tedious. The thrilling part is setting up an awesome, rock-solid system. The tedious part is doing that 10 or even 100 more times. Further, throughout your career you will configure certain applications again and again. You know the ones: sendmail, apache, samba, and so on. There have been many attempts to build a framework, an abstraction, for automating the configuration of these well-worn tools and others. Today there is one such framework that finally gets it right: Chef. There are many, many reasons to use Chef, but the main one is this: you, System Administrator, are too damn smart to spend the rest of your professional life reinventing the wheel.

Let me tell you a little more about how wonderful Chef is, then briefly show you some technical highlights, and finally clear up some Chef myths.

The Top 5 Reasons to use Chef

  1. Writing reams of documentation sucks. Chef drastically reduces the amount of documentation you have to write.
  2. Bash doesn't scale. Seriously.
  3. Technical Awesomeness
  4. Chef grows with you
  5. You can stop reinventing the wheel

Write Less Documentation


Consider a basic sendmail configuration with Chef. A recipe largely documents itself.
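
Below is a minimal sketch of what such a recipe might look like (the template name, file path, and structure are illustrative assumptions, not the post's original code):

# Install sendmail, render its config from a cookbook template,
# and keep the service enabled and running.
package "sendmail"

template "/etc/mail/sendmail.mc" do
  source "sendmail.mc.erb"   # assumed template shipped in the cookbook
  owner "root"
  group "root"
  mode "0644"
  notifies :restart, "service[sendmail]"
end

service "sendmail" do
  action [:enable, :start]
end

The resource declarations read as documentation: anyone can see what gets installed, which file is managed, and which service restarts when that file changes.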

Bash Doesn't Scale

Bash is a wonderful thing, but like all UNIX tools, it is fundamentally limited by design. Bash doesn't have a code reuse mechanism more powerful than functions, and it uses a single global namespace. These and other limitations have made it hard for us sysadmins to reuse and generalize our shell scripts across different distributions, let alone different versions of *nix. Lastly, it is quite difficult to write shell scripts that are idempotent, that is, scripts that have the same effect whether run once or 100 times. This is particularly true when manipulating configuration files. Let's take a look at how you would configure /etc/sudoers with Chef.
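
Here is a sketch of how that might look; the attribute names are my own assumptions, not the post's original code. First, an ERB template for the file:

# sudoers.erb -- rendered by Chef; local edits will be overwritten.
Defaults    !lecture,tty_tickets,!fqdn

root ALL=(ALL) ALL

<% @sudoers_users.each do |user| -%>
<%= user %> ALL=(ALL) <%= @passwordless ? "NOPASSWD:" : "" %>ALL
<% end -%>

%<%= @sudoers_group %> ALL=(ALL) ALL

And the recipe that renders it, idempotently, on every run:

# Render /etc/sudoers from the template above, pulling the values
# out of node attributes (attribute names assumed for illustration).
template "/etc/sudoers" do
  source "sudoers.erb"
  owner "root"
  group "root"
  mode "0440"
  variables(
    :sudoers_users => node["authorization"]["sudo"]["users"],
    :sudoers_group => node["authorization"]["sudo"]["group"],
    :passwordless  => node["authorization"]["sudo"]["passwordless"]
  )
end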


All the variables in this template are passed in from this recipe. If you think you can replicate this template using sed, please submit it as a comment so we can compare results.

Last but not least, Bash can be just as unreadable and obfuscated as the darkest Perl code. With Chef you have a high-level, intuitive DSL for describing your configuration. Remember what I said about needing less documentation?


Technical Awesomeness

NOSQL FTW

One of the virtues that many *nix tools share is that they store their configurations in text files rather than in binary formats or a database. Chef stores your system configuration both in text and in a database. It accomplishes this by using the document-oriented database CouchDB. This makes the configuration searchable and fast to query. One of my annoyances with Nagios is that I have to restart it every time I change any service or host. There are no annoying restarts with Chef; each configuration change you make takes effect instantly. Further, you can use your favorite text editor and the command line to make configuration changes. Finally, using a database means that Chef is easily accessed through a GUI. We sysadmins often look down on GUIs as crutches for the Windows crowd, but the Chef web UI is excellent for visualizing your own infrastructure, something that is hard to do while staring at plain text.

Knowing is Half the Battle

Chef uses Ohai to collect data about your system. Your recipes can access these attributes and make decisions based on them. For example, you can determine which version of Red Hat you are using simply by looking up the value of node['platform_version']. You don't have to cat | grep | awk to find out which release you are on.
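
As a sketch, a recipe can branch on those attributes directly (the platform-to-package mapping is the usual Apache one, shown for illustration):

# Pick the right Apache package name from Ohai's platform attribute.
case node["platform"]
when "redhat", "centos", "fedora"
  package "httpd"
else
  package "apache2"
end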

Ohai makes cookbooks more dynamic and able to support different distributions. As we will see later, this is one of the reasons there are so many high-quality cookbooks available.

Search

Search is a feature of the Chef Server that allows you to query the configuration information of all your other servers and of globally defined data bags. This lets you do things like configure clusters, where a member of the cluster needs to know not only its own configuration but also the configurations of the other members.

Example of using search with data bags. For the full recipe go here.
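
As a minimal sketch of the data bag side (the bag name and item fields are assumptions, not the post's recipe), a recipe can create a local account for every item in a "users" data bag:

# Query the "users" data bag and manage a user account for each item.
search(:users, "*:*").each do |u|
  user u["id"] do
    uid   u["uid"]
    shell u["shell"]
  end
end

The same search call works against node data, so a load balancer's recipe can ask for search(:node, "role:webserver") and build its backend list from the results.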

Knife

Knife is one of the truly great command line tools. It is your primary mechanism for interacting with the chef-server. Knife shares many usage patterns with git. If you love git, you'll love knife.

shef

shef works the way you work: iteratively. Most of us system administrators are self-taught, and we learn best by doing. Fire up shef and you can play with attributes and create recipes on the fly. Further, you can connect to your Chef server and download its cookbooks.
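
A session might look something like this (a sketch only; prompts and mode commands vary across Chef versions):

$ shef
chef > recipe_mode
chef:recipe > package "htop"   # declare a resource interactively
chef:recipe > run_chef         # converge just what you declared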


Chef Grows with You

Chef uses pure Ruby as its configuration language, not a shackled subset of Ruby, nor yet another custom configuration language. You only have to learn a small amount of Ruby to get started with Chef; once you get beyond the basics, you can go further with Ruby. Many of you are grumbling now because Ruby sucks compared to Perl/Python/TCL/<insert-interpreted-language-here>. Well, Ruby may pale in comparison to Python, but it is still a powerful, full-featured language.
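
For example, plain Ruby works directly inside a recipe, so repetitive resource declarations collapse into ordinary loops (the package list here is just an illustration):

# Install a list of admin tools with one loop instead of three stanzas.
%w[htop tmux git-core].each do |pkg|
  package pkg
end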

Just like Perl, there is a lot of dark magic inside Ruby. It can't be used carelessly or it will bite you in the ass. Unlike Python, Ruby does not make it difficult to do the wrong thing.


You can stop reinventing the wheel

Until Chef, we sysadmins did not have a truly modular way to abstract and share our system configurations. Please stop reading and browse http://community.opscode.com/cookbooks. Later on, you should look at the code in  https://github.com/opscode/cookbooks. You will discover that some turncoat sysadmins are giving away our trade secrets. Now is the time for you to do the same. In fact, you can rip out a large chunk of your shell scripts and replace them with Chef cookbooks. You will find that many existing recipes meet your requirements and you can easily add new recipes to existing cookbooks for your unmet requirements.

Recently, I replaced a 500-line shell script with 7 cookbooks: 5 reused from community.opscode.com and 2 new ones written from scratch. In the process of replacing the shell script, I at first wrote 4 cookbooks that unwittingly duplicated the functionality of existing ones.

The emergence of cloud computing, whether on a public cloud such as EC2 or internally with OpenStack, means that we systems administrators will have to manage at least 10 times more systems. We have to become 10 times more productive. We have to shift from "managing servers" to "managing configurations." Chef is one of the key tools for accomplishing this.

One Big Fat Chef Caveat


Chef makes good sysadmins more productive. It does not turn junior sysadmins into experts. In fact, it makes them more dangerous. As I overheard on IRC one day, "Chef is to sysadmins what C++ is to software engineering." This is very true. You can automate system configurations with off-the-shelf cookbooks, but you have to understand exactly what they do and test them rigorously.

Chef Myths

There are a few myths about Chef floating around the intertubes that need to be exploded. The biggest one is that you have to be a professional programmer to use Chef. This is untrue for a couple of reasons, starting with its assumption that system administrators don't already program. Writing complex shell scripts is programming. Creating Apache rewrite rules is quite close to programming. Further, Chef doesn't require you to be a Rubyist to get started, just as you don't have to be a bash hacker to use the command line. Take a look at this tutorial and you will see what I mean.



Amazon Kindle source code

Source Code Notice

Amazon is pleased to make available to you for download an archive file of the machine readable source code ("Source Code") corresponding to modified software packages used in the Kindle device. By downloading the Source Code, you agree to the following:

AMAZON AND ITS AFFILIATES PROVIDE THE SOURCE CODE TO YOU ON AN "AS IS" BASIS WITHOUT REPRESENTATIONS OR WARRANTIES OF ANY KIND. YOU EXPRESSLY AGREE THAT YOUR USE OF THE SOURCE CODE IS AT YOUR SOLE RISK. TO THE FULL EXTENT PERMISSIBLE BY APPLICABLE LAW, AMAZON AND ITS AFFILIATES DISCLAIM ALL WARRANTIES, EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. AMAZON AND ITS AFFILIATES WILL NOT BE LIABLE FOR ANY DAMAGES OF ANY KIND ARISING FROM THE USE OF THE SOURCE CODE, INCLUDING, BUT NOT LIMITED TO DIRECT, INDIRECT, INCIDENTAL, PUNITIVE, AND CONSEQUENTIAL DAMAGES.

Click on the links below to download an archive file of the Kindle machine readable Source Code:

Kindle (Wi-Fi, 6" E Ink Display)

Kindle (Latest Generation, Wi-Fi, 6")
Kindle 3G (Latest Generation, Free 3G + Wi-Fi, 6")

Kindle (2nd Generation, Free 3G)

Kindle (2nd Generation, U.S. Wireless)

Kindle DX (Free 3G)

Kindle DX (U.S. Wireless)

Kindle (1st Generation)



October 14 2011

Dolly Drive: Time Machine in the Cloud

As many of the Mac AppStorm writers will tell you, backup is important! It is the one thing protecting you from massive data loss, hours of frustration, and lots of hair pulling.

With the advent of Leopard, Apple released a built-in backup utility called Time Machine that makes backup a breeze. However, Time Machine was developed for local use only: it will back up to a FireWire or USB hard drive plugged directly into your computer, or to a Time Capsule device on your local Wi-Fi network. While that is a very good thing, natural disasters do occur, as do theft and simple hard drive failure, any of which can put your backup at risk. What if you could use Time Machine to back up to the cloud?

Introducing Dolly Drive

Dolly Drive does just that. It enables you to use Time Machine to back up to a cloud service called the Dolly Grid.

Backing up to Dolly Grid

Backing up using Dolly Drive just requires a small application that changes a few things about your Time Machine settings. Instead of backing up to a local hard disk or Time Capsule on your local network, it creates a backup that is transferred up to the Dolly Grid.

Dolly Drive main window

Dolly Drive Backup Status

One thing to remember is the slowness associated with online backup. Whether you use Dropbox, CrashPlan or Dolly Drive, your backups are going to take a bit longer than they would if they were going to a local hard disk. However, the benefits (protection against theft, hard drive failure or natural disaster) often outweigh the downside of slower backups.

Cloning With Dolly Clone

Once you have Time Machine backing up to the cloud, what are you going to do with the hard drive sitting idle beside your computer? Use it as a local backup, of course! With most of your data secured online, it can take hours to download everything to get going again after a hard drive failure or loss of some kind. Having a local backup as well as a cloud backup will help you get up and running again in a matter of minutes instead of hours.

Since Dolly Drive takes over your Time Machine capabilities (Apple doesn’t allow two different Time Machine instances to exist on one Mac at the same time), you will need to use a cloning utility instead. Recently, Dolly Drive added cloning capabilities right inside their application under the name “Dolly Clone.”

Dolly Clone, selecting a source

Dolly Clone is about as simple as it gets. You pick what you want backed up and then which drive it should be cloned to. Then you can choose to have Dolly Clone wipe the backup destination and start fresh, or have it smartly update the drives to be clones of each other. The latter is done by determining the differences between the two drives and then adjusting the destination drive to match the original.

Pricing Online Backup

Dolly Drive is a subscription service (with Dolly Clone being a free download for everyone). There are a few different plans, starting at $5/month for 50GB, going up to $10/month for 250GB and even $55/month for 2TB of storage (discounts are available if you pay in advance). Each plan grows by an extra 5GB for every month you remain a customer; since Time Machine backups continuously expand, that’s a great bonus of using Dolly Drive.

The two main competitors to Dolly Drive appear to be CrashPlan and Backblaze. However, these don’t use the built-in Time Machine system to back up. They each charge $5/month for unlimited backup. It’s important to note, though, that restoring from these services generally requires logging onto their website and downloading a .zip file. This is much less fluid than connecting to your Dolly Drive backup through Time Machine and restoring from there.

Conclusion

Dolly Drive for Lion is, at the time of writing, still in beta. There are a few bugs, which, according to Dolly Drive, should be fixed with Lion’s 10.7.2 update. However, it worked splendidly for me.

It is stuck with the slow-Internet-backup problem that all of its competitors also face: with a normal home connection, the Internet simply isn’t fast enough to match local backup speeds. While that isn’t Dolly Drive’s fault, it is something to think about if you plan to back up terabytes of data.

Because it uses Time Machine to back up, there isn’t a way to access your files from a mobile device or a different computer, even though your files are located in the cloud.

Should you start using Dolly Drive for cloud backup? I would say yes if you haven’t ever tried online backup. Being so deeply integrated with the Mac operating system is fantastic, and I found their support to be exceptional as well. If you are already backed up with another online backup service, I would be a bit wary, mainly due to the amount of time it would take to get all of your data into the cloud again.

Do you use an online backup service? Have you tried out Dolly Drive? Let us know in the comments!

October 04 2011

Scala on Heroku

The sixth official language on the Heroku polyglot platform is Scala, available in public beta on the Cedar stack starting today.

Scala deftly blends object-oriented programming with functional programming. It offers an approachable syntax for Java and C developers, the power of a functional language like Erlang or Clojure, and the conciseness and programmer-friendliness normally found in scripting languages such as Ruby or Python. It has found traction with big-scale companies like Twitter and Foursquare, plus many others. Perhaps most notably, Scala offers a path forward for Java developers who seek a more modern programming language.

More on those points in a moment. But first, let's see it in action.

Scala on Heroku in Two Minutes

Create a directory. Start with this source file:

src/main/scala/Web.scala

import org.jboss.netty.handler.codec.http.{HttpRequest, HttpResponse}
import com.twitter.finagle.builder.ServerBuilder
import com.twitter.finagle.http.{Http, Response}
import com.twitter.finagle.Service
import com.twitter.util.Future
import java.net.InetSocketAddress
import util.Properties

object Web {
  def main(args: Array[String]) {
    val port = Properties.envOrElse("PORT", "8080").toInt
    println("Starting on port:"+port)
    ServerBuilder()
      .codec(Http())
      .name("hello-server")
      .bindTo(new InetSocketAddress(port))
      .build(new Hello)
  }
}

class Hello extends Service[HttpRequest, HttpResponse] {
  def apply(req: HttpRequest): Future[HttpResponse] = {
    val response = Response()
    response.setStatusCode(200)
    response.setContentString("Hello from Scala!")
    Future(response)
  }
}

Add the following files to declare dependencies and build with sbt, the simple build tool for Scala:

project/build.properties

sbt.version=0.11.0

build.sbt

import com.typesafe.startscript.StartScriptPlugin

seq(StartScriptPlugin.startScriptForClassesSettings: _*)

name := "hello"

version := "1.0"

scalaVersion := "2.8.1"

resolvers += "twitter-repo" at "http://maven.twttr.com"

libraryDependencies ++= Seq("com.twitter" % "finagle-core" % "1.9.0", "com.twitter" % "finagle-http" % "1.9.0")

Declare how the app runs with a start script plugin and Procfile:

project/build.sbt

resolvers += Classpaths.typesafeResolver

addSbtPlugin("com.typesafe.startscript" % "xsbt-start-script-plugin" % "0.3.0")

Procfile

web: target/start Web

Commit to Git:

$ git init
$ git add .
$ git commit -m init

Create an app on the Cedar stack and deploy:

$ heroku create --stack cedar
Creating warm-frost-1289... done, stack is cedar
http://warm-frost-1289.herokuapp.com/ | git@heroku.com:warm-frost-1289.git
Git remote heroku added

$ git push heroku master
Counting objects: 14, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (9/9), done.
Writing objects: 100% (14/14), 1.51 KiB, done.
Total 14 (delta 1), reused 0 (delta 0)

-----> Heroku receiving push
-----> Scala app detected
-----> Building app with sbt v0.11.0
-----> Running: sbt clean compile stage
       Getting net.java.dev.jna jna 3.2.3 ...
       ...
       [success] Total time: 0 s, completed Sep 26, 2011 8:41:10 PM
-----> Discovering process types
       Procfile declares types -> web
-----> Compiled slug size is 43.1MB
-----> Launching... done, v3
       http://warm-frost-1289.herokuapp.com deployed to Heroku

Then view your app on the web!

$ curl http://warm-frost-1289.herokuapp.com
Hello from Scala!

Dev Center: Getting Started with Scala on Heroku/Cedar

Language and Community

Scala is designed as an evolution of Java that addresses the verbosity of Java syntax and adds many powerful language features such as type inference and functional orientation. Java developers who have made the switch to Scala often say that it brings fun back to developing on the JVM. Boilerplate and ceremony are replaced with elegant constructs, to express intent in fewer lines of code. Developers get all the benefits of the JVM — including the huge ecosystem of libraries and tools, and a robust and performant runtime — with a language tailored to developer happiness and productivity.

Scala is strongly- and statically-typed, like Java (and unlike Erlang and Clojure). Its type inference has much in common with Haskell.

Yet Scala achieves much of the ease of use of a dynamically-typed language (such as Ruby or Python). Though there are many well-established options among dynamically-typed open source languages, Scala is one of the few type-safe languages that is both practical and pleasant to use. The static vs dynamic typing debate rages on, but if you're in the type-safe camp, Scala is an obvious choice.

Language creator Martin Odersky's academic background shines through in the feel of the language and the community. But the language's design balances academic influence with approachability and pragmatism. The result is that Scala takes many of the best ideas from the computer science research world, and makes them practical in an applied setting.

Members of the Scala community tend to be forward-thinking, expert-level Java programmers, or developers from functional backgrounds (such as Haskell or ML) who see an opportunity to apply the patterns they love in a commercially viable environment.

There is some debate about whether Scala is too hard to learn or too complex. One answer is that the language is still young enough that learning resources aren't yet fully-baked, although Twitter's Scala School is one good resource for beginners. But perhaps Scala is simply a sharper tool than Java: in the hands of experts it's a powerful tool, but copy-paste developers may find themselves with self-inflicted wounds.

Scala Days is the primary Scala conference, although the language is well-represented at cross-community conferences like Strange Loop.

The language community has blossomed and is now gaining more and more mainstream adoption. Community members are enthusiastic about the language's potential, making for an environment that welcomes and encourages newcomers.

Open Source Projects

Open source is thriving in the Scala world. The Lift web framework is a well-known early mover, but the last two years have seen an explosion of new projects showcasing Scala's strengths.

Finagle is a networking library coming out of the Twitter engineering department. It's not a web framework in the sense of Rails or Django, but rather a toolkit for creating network clients and servers. The server builder is in some ways reminiscent of the Node.js stdlib for creating servers, but much more feature-full: fault-tolerance, backpressure (rate-limiting defense against attacks), and service discovery to name a few. The web is increasingly a world of connected services, and Finagle (and Scala) are a natural fit for that new order.

Spark runs on Mesos (a good example of hooking into the existing JVM ecosystem) to do in-memory dataset processing, such as this impressive demo of loading all of Wikipedia into memory for lightning-fast searches. Two other notable projects are Akka (concurrency middleware) and Play! (web framework), which we'll look at shortly.

The Path Forward for Java?

Some Java developers have been envious of modern, agile, web-friendly languages like Ruby or Python — but they don't want to give up type safety, the Java library ecosystem, or the JVM. Leaders in the Java community are aware of this stagnation problem and see alternate JVM languages as the path forward. Scala is the front-runner candidate on this, with support from influential people like Bruce Eckel, Dick Wall and Carl Quinn of the Java Posse, and Bill Venners.

Scala is a natural successor to Java for a few reasons. Its basic syntax is familiar, in contrast with Erlang and Clojure: two other functional, concurrency-focused languages which many developers find inscrutable. Another reason is that Scala's functional and object-oriented mix allows new developers to build programs in an OO model to start with. Over time, they can learn functional techniques and blend them in where appropriate.

Working with Java libraries from Scala is trivial and practical. You can not only call Java libraries from Scala, but go the other way — provide Scala libraries for Java developers to call. Akka is one example of this.

There's obvious overlap here between Scala as a reboot of the Java language and toolchain, and the Play! web framework as a reboot of Java web frameworks. Indeed, these trends are converging, with Play! 2.0 putting Scala front-and-center. The fact that Play! can be used in a natural way from both Java and Scala is another testament to JVM interoperability. Play 2.0 will even use sbt as the builder and have native Akka support.

Typesafe and Akka

Typesafe is a new company emerging as a leader in Scala, with language creator Martin Odersky and Akka framework creator Jonas Bonér as co-founders. Their open-source product is the Typesafe Stack, a commercially-supported distribution of Scala and Akka.

Akka is an event-driven middleware framework with emphasis on concurrency and scale-out. Akka uses the actor model with features such as supervision hierarchies and futures.

The Heroku team worked closely with Typesafe on bringing Scala to our platform. This collaboration produced items like the xsbt-start-script-plugin, and coordination around the release of sbt 0.11.

Havoc Pennington of Typesafe built WebWords, an excellent real-world demonstration of using Akka's concurrency capabilities to scrape and process web pages. Try it out, then dig in on the sourcecode and his epic Dev Center article explaining the app's architecture in detail. Havoc also gave an educational talk at Dreamforce about Akka, Scala, and Play!.

Typesafe: we enjoyed working with you, and look forward to more productive collaboration in the future. Thanks!

Conclusion

Scala's explosive growth over the past two years is great news for both Java developers and functional programming. Scala on Heroku, combined with powerful toolkits like Finagle and Akka, is a great fit for the emerging future of connected web services.


Special thanks to Havoc Pennington, Jeff Smick, Steve Jenson, James Ward, Bruce Eckel, and Alex Payne for alpha-testing and help with this post.



September 28 2011

Getting Started with MMS

Telling someone “You should set up monitoring” is kind of like telling someone “You should exercise 20 minutes three times a week.” Yes, you know you should, but your chair is so comfortable and you haven’t keeled over dead yet.

For years*, 10gen has been planning to do monitoring “right,” making it painless to monitor your database. Today, we released the MongoDB Monitoring Service: MMS.

MMS is free hosted monitoring for MongoDB. I’ve been using it to help out paying customers for a while, so I thought I’d do a quick post on useful stuff I’ve discovered (documentation is… uh… a little light, so far).

So, first: you sign up.

There are two options: registering a new company, or registering another account for an existing company. For example, let’s say I want to monitor the servers for Snail in a Turtleneck Enterprises. I’ll create a new account and company group. Then Andrew, sysadmin of my heart, can create an account with Snail in a Turtleneck Enterprises and have access to all the same monitoring info.

Once you’re registered, you’ll see a page encouraging you to download the MMS agent. Click on the “download the agent” link.

This is a little Python program that collects stats from MongoDB, so you need to have pymongo installed, too. Starting from scratch on Ubuntu, do:

$ # prereqs
$ sudo apt-get install python python-setuptools
$ sudo easy_install pymongo
$
$ # set up agent
$ unzip name-of-agent.zip
$ cd name-of-agent
$ mkdir logs
$
$ # start agent
$ nohup python agent.py > logs/agent.log 2>&1 &

Last step! Back to the website: see that “+” button next to the “Hosts” title?

Designed by programmers, for Vulcans

Click on that and type a hostname. If you have a sharded cluster, add a mongos. If you have a replica set, add any member.

Now go have a nice cup of coffee. This is an important part of the process.

When you get back, tada, you’ll have buttloads of graphs. They probably won’t have much on them, since MMS will have been monitoring them for all of a few minutes.

Cool stuff to poke

This is the top bar of buttons:

Of immediate interest: click “Hosts” to see a list of hosts.

You’ll see hostname, role, and the last time the MMS agent was able to reach this host. Hosts that it hasn’t reached recently will have a red ping time.

Now click on a server’s name to see all of the info about it. Let’s look at a single graph.

You can click & drag to see a smaller bit of time on the graph. See those icons in the top right? Those give you:

  • “+”: add to dashboard. You can create a custom dashboard with any charts you’re interested in; click on the “Dashboard” link next to “Hosts” to see your dashboard.
  • “Link”: a private URL for this chart. You’ll have to be logged in to see it.
  • “Email”: email a jpg of this chart to someone.
  • “i”: maybe the most important one, a description of what this chart represents.

That’s the basics. Some other points of interest:

  • You can set up alerts by clicking on “Alerts” in the top bar
  • “Events” shows you when hosts went down or came up, became primary or secondary, or were upgraded.
  • Arbiters don’t have their own chart, since they don’t have data. However, there is an “Arbiters” tab that lists them if you have some.
  • The “Last Ping” tab contains all of the info sent by MMS on the last ping, which I find interesting.
  • If you are confused, there is an “FAQ” link in the top bar that answers some common questions.

If you have any problems with MMS, there’s a little form at the bottom to let you complain:

This will file a bug report for you. This is a “private” bug tracker: only 10gen and people in your group will be able to see the bugs you file.

* If you ran mongod --help using MongoDB version 1.0.0 or higher, you might have noticed some options that started with --mms. In other words, we’ve been planning this for a little while.

September 27 2011

Stack Overflow Scala Tutorial


Scala is a general purpose programming language principally targeting the Java Virtual Machine. Designed to express common programming patterns in a concise, elegant, and type-safe way, it fuses both imperative and functional programming styles. Its key features are:

  • Statically typed
  • Advanced type-system with type inference and declaration-site variance
  • Function types (including anonymous) which support closures
  • Pattern-matching
  • Implicit parameters and conversions which support the typeclass and pimp my library patterns
  • Mixin composition
  • Full interop with Java

For more information, see the official Scala Introduction.

Stack Overflow Scala Tutorial

  1. Introduction to Scala
  2. Variables/values
  3. Methods
  4. Literals, statements and blocks
  5. Loops/recursion
  6. Data structures / Collections
  7. For-comprehension
  8. Enumeration
  9. Pattern-matching
  10. Classes, objects and types
  11. Packages, imports and visibility identifiers
  12. Inheritance
  13. Extractors
  14. Case classes
  15. Parameterized types
  16. Traits
  17. Self references
  18. Error handling
  19. Type handling
  20. Annotations
  21. Functions/Function literals
  22. Type safety
  23. Implicits
  24. Pimp-my-library pattern
  25. Actors
  26. Use Java from Scala and vice versa
  27. XML literals
    • Explanation
  28. Scala Swing
  29. Type Programming
  30. Functional Scala

Further learning

  1. Learning Resources
  2. Operator precedence
  3. Scala blogs to follow
  4. Scala style


September 25 2011

Run a Node PostgreSQL App on Heroku

I really like the pg PostgreSQL library by Brian Carlson, and considering the amount of attention we’ve given to Redis and MongoDB on DailyJS I thought it was time to give relational databases some coverage again.

Heroku is one of many services that support Node. This tutorial will demonstrate how easy it is to get a simple Express and pg app running.


The files for this tutorial can be found here: alexyoung / dailyjs-heroku-postgres.

Getting Started

An account at Heroku is required first. Next, install the Heroku client:

Once that’s installed, try typing heroku help in a terminal to see what the command-line client can do. Heroku obviously realised that us developers prefer the command line to a GUI — although some basic management features are available through Heroku’s web interface, almost everything is handled from the command-line tool.

Authentication is required before progressing:

heroku login

I had to tell Heroku about my public SSH key too:

heroku keys:add ~/.ssh/id_rsa.pub

Module Installation

Heroku wisely supports npm, so our app begins with a package.json:

{ 
  "name": "dailyjs-heroku-postgres"
, "version": "0.0.1"
, "dependencies": {
    "express": "2.4.5"
  , "pg": "0.5.7"
  }
}

PostgreSQL Setup

Heroku uses environment variables to supply database connection parameters. For PostgreSQL, this is simply process.env.DATABASE_URL. Connecting to the database is as simple as this:

var pg = require('pg').native
  , connectionString = process.env.DATABASE_URL || 'postgres://localhost:5432/dailyjs'
  , client
  , query;

client = new pg.Client(connectionString);
client.connect();
query = client.query('SELECT * FROM mytable');
query.on('end', function() { client.end(); });

Notice how pg uses events — I’ve called client.end() so this script will exit gracefully when it’s finished. If you’ve got PostgreSQL installed locally you could try experimenting with this script.

Schema

There are a few ways to change the database schema on Heroku. I’ve made a little schema creation script:

var pg = require('pg').native
  , connectionString = process.env.DATABASE_URL || 'postgres://localhost:5432/dailyjs'
  , client
  , query;

client = new pg.Client(connectionString);
client.connect();
query = client.query('CREATE TABLE visits (date date)');
query.on('end', function() { client.end(); });

I’ll explain how to run this on Heroku later.

Another option would be to use a library like node-migrate by TJ Holowaychuk. I haven’t actually used this before, but it seems like a sensible way to keep local schemas in sync as developers work on a project.

Typing heroku help pg shows the commands available for PostgreSQL, and this includes heroku pg:psql which can be used to open a remote connection to a dedicated database. This won’t be allowed for a shared database, but could be used to modify the schema.

Example App

Now that we’ve got a package.json, we just need an app to run. Create a file called web.js that starts like this:

var express = require('express')
  , app = express.createServer(express.logger())
  , pg = require('pg').native
  , connectionString = process.env.DATABASE_URL || 'postgres://localhost:5432/dailyjs'
  , start = new Date()
  , port = process.env.PORT || 3000
  , client
  , query;

Notice how I use Heroku’s environment variables for the database connection string and server port, falling back to defaults for development purposes.

Now we can add the code required to connect to the database:

client = new pg.Client(connectionString);
client.connect();

A single Express route should suffice for this tutorial:

app.get('/', function(req, res) {
  var date = new Date();

  client.query('INSERT INTO visits(date) VALUES($1)', [date]);

  query = client.query('SELECT COUNT(date) AS count FROM visits WHERE date = $1', [date]);
  query.on('row', function(result) {
    console.log(result);

    if (!result) {
      return res.send('No data found');
    } else {
      res.send('Visits today: ' + result.count);
    }
  });
});

And we better start the app too:

app.listen(port, function() {
  console.log('Listening on:', port);
});

Procfile

The last thing we need is a file that tells Heroku what our main script is called. Create a file called Procfile:

web: node web.js

Deploying

Heroku uses Git for deployment, so set up a repo:

git init
git add .
git commit -m 'First commit'

Then run this command which creates a remote app on the service with a random name:

heroku create --stack cedar

It’ll give you the URL, but your app isn’t quite ready yet.

Now push the repo to make the magic happen:

git push heroku master

And tell Heroku you want to use a database:

heroku addons:add shared-database

And finally… run the schema creation script:

heroku run node schema.js

Hopefully you now have a little Node and PostgreSQL app running on Heroku!

If anything went wrong, Heroku’s documentation is excellent, and you can download my sample source here: alexyoung / dailyjs-heroku-postgres.

September 20 2011

Merging multiple RSS Feeds into one

RSS (Really Simple Syndication) is a web feed format used to publish frequently updated content on websites.

If we need feeds from multiple locations, we can easily merge them into one. The best tool I can recommend, with a near-perfect UI, is Yahoo Pipes. Using Yahoo Pipes we can merge multiple RSS feeds, and it has many customization options, so you can create feeds to fit your requirements. The Yahoo Pipes filtering option gives you the flexibility to merge feeds exactly the way you want.

Following are the steps to merge multiple RSS feeds using Yahoo Pipes:

Step 1: Visit http://pipes.yahoo.com/pipes/

Step 2: Log in using your Yahoo login details.

Step 3: Click on Create a Pipe.

Step 4: Drag and drop Fetch Feed from the left panel and enter the feed URL.

Step 5: Repeat step 4 for each additional feed. You can add as many feeds as you need.

Step 6: You can also filter the feed data. Drag and drop Filter from the Operators section and set your filtering parameters. You can add multiple operators as needed. For example:

You get your customized output in Pipe Output.

Step 7: Save the pipe.

How to use your customized feed:

1. Go to My Pipes, where you can see the list of pipes you have created, and click on one of them:

2. Click on Get RSS, and your RSS link is ready.

You can embed this feed on your website and enjoy multiple feeds at one time. :)

