The Fifth Conference

As mentioned on their website, The Fifth Conference is a platform for vision, innovation and entrepreneurship. They publish a quarterly journal, organise events and maintain their website.

On Wednesday June 9th, 2010, The Fifth Conference organized GROW, a panel discussion on growth and entrepreneurship. It took place in the evening from 19h00 ’til 23h00 in the machine room of the Thermotechnical Institute of the K.U. Leuven. And what a venue (see the pictures) that is for a conference that finds its inspiration in the 5th Solvay Conference on Electrons and Photons. In October 1927, the world’s leading physicists debated the newly formulated quantum theory in Brussels. It brought together 29 scientists, of whom 17 were or became Nobel Prize winners!

The GROW event sported quite an interesting format: the evening was divided into two equal halves of 3 presentations each on more or less provocative ideas, meant to instigate the subsequent panel debates. As with any debate of this type, focus tends to shift quite often. But in this post, I’ll try to highlight some of the major topics discussed.

Growth

Psychologists and philosophers seem to state that man is happiest and most comfortable in a progressive state, rather than at its full complement; life is hard in the stationary state, and miserable in the declining one. So there seems to be a tendency, even an inclination, to strive for growth. Frank Boermeester (The Fifth Conference) adds a number of facts in his presentation to make the point that Belgium is currently in a declining state. A number of structural fault lines create a difficult environment to start and/or grow a business.

Note however that growth comes in different forms and can mean different things in different situations. So one has to carefully evaluate the situation and adapt one’s goals accordingly. Jeroen Meens (Cynex) applies this to 4 stages in corporate evolution:

  • as a Startup, focus on your product and resist temptations to sidestep, but adjust the business plan to reality and keep your ideal customer in mind,
  • when reaching your Ceiling, start looking for scalability in your business and make time to seek leverage in what you are doing,
  • until you reach the point for the Great Decision, where you look for the added value and properly assess the risk(s),
  • so as to Extend by focusing on the long-term strategy and, as owner, by letting go of the organization so it becomes an entity of itself.

Advice

To successfully evolve through those different phases, all panel members seemed to agree on one fundamental requirement to start, run or grow a business: you need to attract proper advisors. Make sure you have a sounding board at your disposal to reflect on your ideas and criticize your decisions; to learn from practical experience. That sounding board can have many different faces, e.g. an auditor or a board of advisors.

Maarten Vandenbroucke (Gatewing; presentation) rightly raised the question of how to attract advice when, as a startup, you do not have sufficient resources. And how to distinguish true mentors from consultants? Simply asking how many businesses a candidate advisor has started or run himself must be the first litmus test, so says the panel. Somebody from the audience also pointed out that most seasoned entrepreneurs would be more than willing to share their wealth of experience.

Cashflow

The first rule dictated by experience happens to be the common theme through all the different presentations of the event: manage your cashflow … thoroughly. Make sure your invoices are paid, so you’re not the bank of your customers. Keep an eye on expenses. Sven Bally (Colibricard.be) spends a full presentation on the topic and summarizes it as

Revenue is vanity, Profit is sanity, Cash is reality

Jurgen Ingels (Clear2pay) explains how to turn simple time sheet information into dashboards to manage by financial metrics. This allows his company to balance growth and profitability. His advice? Swap a year of growth with a year of profitability!

But how do you build your business cost-efficiently? Steven Coppens (GIMV) conveys a good approach to growth in his presentation: first build, next prove scalability, then duplicate. It’s like going from proof of concept through construction into production.

Most common reasons for …

So it’s clear that growing an enterprise is the art of balancing a lot of different business facets. You can imagine that, during the debates, this reflection led to the biggest question on business reality: what are the most common grounds for business failure? A summary …

  • Founders share the same background
  • Me, founder, I have to be the CEO
  • Engineer driven
  • Forgotten to sell
  • Under-capitalized to realize the goals

Mobile Camp Brussels 2010

Some weeks ago I heard some echoes over the Twitter channel about #mcbxl. On Saturday, May 8th, 2010, the first edition of MobileCamp Brussels would be organized by some guys I was following. I decided to register to check out the state of the art of mobile computing, the mobile internet and the real-time web in general.

Barcamp

This event would also be my first barcamp, so I was pretty excited about that too. A barcamp is a free, user-generated unconference: an open, participatory workshop event whose content is provided by the participants themselves. So, you can summarize this event as an unplanned geek gathering. Hey, if you know most participants only by their Twitter names … But what an experience that barcamp was! I love the small scale. I adore the interactivity. Energizing! A big thanks goes to the sponsors for the first class venue, catering, etc.; to the organizers @emich, @steffest, @kodel and @janosizoltan for … well, organizing this event; and to all the speakers for sharing their experience and knowledge.

State of the mobile development art

Now off to mobile apps. Well, this subtitle says it all. That’s what I seem to have learned at #mcbxl. Development for mobile platforms still seems more of an art than craftsmanship. Yes, for mobile apps you have a totally different target audience and medium, so, yes, you should completely realign your apps. This is what @kodel also explains in his presentation on Mobile Interaction Models. But, as proven by @Steffest‘s geek’o’tar and his presentation on cross-device development, while a fantastic exploit, it makes one wonder whether mobile devices are becoming to mobile development what browsers are to web development: a testing nightmare. Mobile devices apparently have even more variability in features than browsers have bugs. @gregone shares his experiences in his talk on Designing for Touch Screens.

Largest common denominator for mobile applications?

The cases presented at #mcbxl have fairly specific requirements. Indeed, not everyone wants to write their own keyboard application, do they? However, MobileCamp Brussels wasn’t too clear about what features cross-platform development tools like Titanium offer and how far mobile web applications can take us. @kodel explicitly recommends sticking with native apps for now, forcing you to choose your deployment platform. Hence, he rightfully says to choose very carefully, depending on your audience!

Or limitless possibilities of the mobile devices?

The reason to go native? Remember that a good mobile app takes the user’s context into account for the services offered. Therefore, access to the full features of the mobile device is most likely a key requirement. Hence the need (for now) to choose native applications … to fully exploit all possibilities of those devices, limited only by our own creativity.

TIY = Try It Yourself

As with everything in technology, you can only thoroughly make up your mind after you’ve Tried It Yourself. Mobile web, cross-platform or native, MobileCamp Brussels certainly whetted my desire ;-) to experiment myself. I’ll be rigging my mobile development environment pretty soon.

So please check back once in a while if you are interested to read about my findings. On the other hand, if you have some ideas or experiences to share yourself, please do not hesitate to leave a comment!

MIT Personas, more than a stunning data-mining and -visualization project

By way of some interweb connections, I also came across this amazing MIT project, Personas. It’s actually quite a strange thing that leaves you pondering a few internet-related things. To quote the project’s site:

Personas uses sophisticated natural language processing and the internet to create a data portrait of one’s aggregated online identity. In short, Personas shows you how the Internet sees you.

MIT Personas result on Wim Van Leuven

What is Personas?

The basic idea of Personas is actually rather simple:

Just enter your name and Personas scours the web for information and attempts to characterize the person – to fit them to a predetermined set of categories that an algorithmic process created from a massive corpus of data. The computational process is visualized with each stage of the analysis, finally resulting in the presentation of a seemingly authoritative personal profile.

So, from a technical perspective, Personas is much more than a clean website design and Flash application. It is intrinsically an amazing combination of huge dataset processing and stunning data visualization, but also gorgeous algorithm visualization. On the latter topic: you literally see the algorithm mining the dataset, which is fabulous! What I find so intriguing about appealing infographics is that they tend to address the right hemisphere of our brain more than the left.

MIT Personas working on Wim Van Leuven

But what does Personas mean?

If you think about this neat technological result, there’s more to Personas than just “showing how the internet sees you“. On the one hand, it does give you an idea of the ubiquity and long-time persistence of your online profile. Meet your publicy, as this facet of the internet has been coined. Stowe Boyd has written a very nice article about the 3 facets of one’s person: secrecy, privacy and publicy. Welcome to the decade of publicy, where interactions and tools “default to things being open and with open access, rather than concealing things and limiting access to those explicitly invited“. So, Personas is again a very good reminder to be careful with the breadcrumb trail you create across the internet. Well, not so much careful as conscious: being self-conscious about your publicy gives you control, because you are deliberate about your paper trail.

On the other hand, the Personas project also points out another very important aspect of the current state of the internet:

In a world where fortunes are sought through data-mining vast information repositories, the computer is our indispensable but far from infallible assistant. Personas demonstrates the computer’s uncanny insights and its inadvertent errors, such as the mischaracterizations caused by the inability to separate data from multiple owners of the same name. It is meant for the viewer to reflect on our current and future world, where digital histories are as important if not more important than oral histories, and computational methods of condensing our digital traces are opaque and socially ignorant.

As the internet is run and scavenged by computers, which are just algorithmic and not intelligent (“The question of whether a computer can think is no more interesting than the question of whether a submarine can swim.” – Edsger W. Dijkstra), one has to be careful about the information retrieved by searching or mining the net. On the other hand, regarding publicy and its constant tension with privacy, the Personas project proves there is a realistic risk of retrieving confusing information coming from alternate sources. So it’s a matter of being master of Google’s first pages (Dutch), as Bart De Waele states it. Put differently: control the conversation about your profile. Become the only source of information about yourself. Or your company, for that matter. However, as this might be a very labour-intensive undertaking, as a last resort there is not only a possibility but also an opportunity, not only in falsifying information (p. 28) to protect one’s privacy, but also in smoke-screening your publicy by actively dropping false information, maybe even using mirror identities.

Self regulation

Which proves a conclusive point, also stated by Laurent Haug: “Self regulation is already underway“. Which works in both directions: “You want to spy on me? I will feed you with fake data to push the envelope to where I want it to be … On the contrary, if you give users a system they can trust, one where they can control what is controllable, then they will share the data advertisers need.“

What can I do with the concrete Personas result?

In short? Nothing! It is what it is: an animation and a final visualization … that indicate something, but nothing concrete. It can give you an idea or overview of what can be found on the interweb when people search for you. You can use it as such in your internet toolchest.

Scrum Guide

Just thought I’d share … Yesterday, I came across a nice piece of reading material for everyone interested in Scrum as an agile methodology. The Scrum Guide by Ken Schwaber on ScrumAlliance is a condensed (14 pages) and recent (May 2009) pocket guide on the major principles and guidelines of Scrum. It also gives very practical advice and tips on concrete issues when implementing Scrum.

I think this is an excellent reference manual for anyone interested in a good overview of Scrum. It’s even a document to keep at hand when working with Scrum on a project.

Design Driven Development

Anybody working on software engineering, software development and/or agile methodologies will be very interested in the ideas from the Design Driven Development (D3) website.

Premise

The premise they start with is that the design of any system is an accident that kicks in at conception. Hence, maximizing the opportunities to make those accidents happen is the key to (product) innovation.

Agility

To that end, the author(s) define procedures and practices on how to integrate design into your iterative product life cycle. D3 makes a clear distinction between the management, engineering and design aspects of product development. Most current agile practices focus on the former aspects, whereas D3 introduces the latter by integrating design games into the project iteration, namely at the start of the sprints, where the design games can provide input to the product backlogs.

Design Games

D3 turns the design practices into a set of games, which brings different sets of people, skills and experiences together to make design decisions in a collaborative way. D3 describes 11 different design games, grouped into five categories: Startup, Understand, Question, Design and Experience.

But first and foremost, D3 is about focusing on the solution and not the problem. D3 can be as simple as the hilarious example laid out in their blog.

I4

D3 defines 4 fundamental elements of good design:

  • Innovation is a larger-level breakthrough in solving the intended problem
  • Interaction is about how software or products behave towards their users
  • Information is how you arrange the different elements on the screen
  • Intelligence focuses on the little things which can change the usability of an application.

In this way, D3 tries to bring design to the higher level of the solution space, whereas design used to remain at the product’s code and/or architecture level. The solution is the boundary where the product ends and thus where you as a solution builder can have impact. The higher levels of business and life on the other hand need to be impacted by other means.

Design Cube

D3 also recognizes that no process can guarantee a better design. Creating the right environment with the right set of people is the only way to bring innovation and design. Guidelines to this are laid out in the Design Cube, which defines the people, culture and environment aspects which can greatly contribute to build an innovative organization.

Conclusion

The ideas laid out in the D3 approach seem very viable. However, they have to be tested in practice to prove whether they bring enough value to solution development in the form of product innovation. Most certainly, more practical guidelines, best practices, procedures and tools will have to be defined.

I do have some projects on my radar that might be helped by incorporating the ideas of D3. But in the meantime, does anyone have any practical experience with D3 or other principles for embedding product innovation in solution development?

Kindle … 1st impression

As a Christmas gift, I received an Amazon Kindle from my loving family. Thank you all!

Amazon Kindle

The Amazon Kindle is an electronic book reading device that sports wireless connectivity to, amongst others, shop for and download e-books. The e-book reader comes in two versions: the standard version with a 6″ display, and the Kindle DX with a larger 9.7″ display. However, only the standard version supports global connectivity … 3G! So, I got the 6″ version.

In a couple of blog posts, I’d like to share my impressions and experiences using this device.

Unwrapping

The Kindle and matching leather cover were ordered through Amazon.com (of course) and got delivered at home after a few days. No complaints about that. Both came as separate parcels, in a nice dark blue gift wrapping with a yellow ribbon and a to-be-filled-in greeting card.

Kindle tear off strip

Unwrapping reveals sturdy cardboard boxes marked as “guaranteed 100% frustration free packaging“. Indeed, a tear-off strip on the side allows the top side to fold open. It’s shown at Warwick Poole‘s and Obsessable’s.

The box obviously contains the Kindle, a getting started guide and a power cable. The power cable actually is a microUSB-to-USB cable with a detachable power plug, so you can take off the plug to get a standard USB cable. The power plug is an American plug, which is incompatible in Europe. I don’t find this a problem, because I’ll most likely hook the device up to my laptop … to charge it as well.

Uhhh? Is that the screen?

Kindle screen cover image

Taking out the actual device shows a drawing on the screen surface on how to start the device and hook it up to the power cord. OK, let’s peel off that protective paper cover and have a look at this nice, comfortable-feeling device.

And then it hits you in the face … that is no sticker! That’s the device’s screen actually showing the drawing!!! As seen on Obsessable, it is razor sharp, just like a printed drawing. And how is it even possible to maintain that drawing without draining the battery?

Electronic Paper Display

The Kindle uses an electronic paper display (EPD), enabled by electronic ink (e-ink), that results in those razor sharp drawings and texts that are qualitatively equal to printed paper. Reading on the Kindle is thus a real pleasure and I couldn’t imagine any self-respecting reading device adopting LCD, which is so tiring for your eyes. On the Kindle you can actually read an e-book for hours. And that’s what it is meant for.

On the other hand, however, the EPD limits the Kindle to being a reading device. It can only do grey scale, no colors, and it reacts rather slowly. Going to the next page of a book actually kind of fades out and in to the next page. This seems to be inherent to the technology and takes about a second. It is not a problem during normal reading, but you cannot quickly page through a book. I do not experience this as a big problem, but sometimes it’s a bother if you’d like to quickly browse back a number of pages for something you just read.

A few other advantages of EPD: bright, high contrast, full viewing angle, sunlight readable, thin, lightweight, shatterproof, flexible. Ideally suited for e-book reading devices.

Low power consumption

But 2 specific characteristics make EPD ultra low on power consumption. First, there is no need for a backlight; second, the screen does not need energy to maintain the ‘image’ on the screen, only to change it! Fully charged, the Kindle should be able to last for weeks, when being somewhat ‘economic’ (e.g. turning off wireless). I’ll keep you posted on that aspect of the device.

Kindle's splash screen

But not needing energy to maintain the screen image is the feature Amazon uses to install the startup screen that you see when unpacking the Kindle. It’s also used to give the device a nice schwung: on stand-by, a random image of some famous writer or book is put on the screen … and stays there! Always a nice surprise when you finish reading. Note the wording here … stand-by … I left mine at that and my Kindle was out of battery after a few days.

Conclusion

So, my first impression is that the Kindle is perfect at what it is: a great reading device for electronic books. But it can do much more … I’ll be posting more articles on my Kindle experiences in the next few weeks. So check back once in a while.

Atom for ReST Data Representation

During the last weeks, I’ve been designing an architecture for a new application that’ll have to live in the cloud. Of course a ReSTful API comes as a natural fit. I must confess that I have fallen in love with ReST, due to its inherent simplicity and naturalness. And great libraries like Restlet or JAX-RS implementations exist to support development. Did I mention I’m using Java … ?

One of the major decisions to make when designing a ReST interface is, of course, data representation. Numerous options present themselves here, but after some digging and reading, the Atom Syndication Format combined with XML can be a universal data exchange envelope, especially when combined with some extensions like e.g. OpenSearch to support paging and searching. Atom’s extensive use of links with relation attributes, amongst others, makes it a natural fit to ReST. Again, great libraries exist to ease the development of such Atom feeds, and they even support the needed extensibility for the data envelope. These libraries also integrate nicely with the aforementioned technologies for ReST development.

The most renowned use of Atom with XML for ReSTful service data representation would of course be the Google Data Protocol. However, when applying its philosophy to our own ReSTful services using various combinations of development libraries, implementation felt difficult every time, even awkward. At some point, I even considered re-implementing Atom libraries to support the needed extensibility of the generated feeds. Something smelt fishy!

Doing some more research, I came across articles from Atom authorities on how to (Ellias Torres) and how not to (James M. Snell, Rob Yates) extend Atom feeds. They also express their views on GData (Joe Gregorio, Clinton DeWitt).

I got my Aha-Erlebnis this week when those guys pointed out where most extensions of Atom go wrong. There is a clear distinction between the data and the envelope. You can extend both, but for different reasons.

To support e.g. OpenSearch, you have to extend the envelope, because you want more intelligent data-producing servers and data-consuming clients. In developing ReST services, this type of extension is more static, as you will provide the extended envelope information, e.g. the search data, in every feed. So, for example, you’ll super-class your micro-framework to support this extension once and for all.

On the other hand, to transport your data, you have to extend your feeds at the data transport layer, which are the entries in Atom parlance. You’ll provide a representation for every entity you wish to ship, which you wrap in an Atom entry as its content. Hence, this wrapping will be a lot more dynamic in your system.
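To make the distinction concrete, here is a sketch of such a feed; the payload namespace and element names (ex:customer, etc.) are purely hypothetical. The OpenSearch elements extend the envelope, while each entry wraps an entity representation as its content:

```xml
<feed xmlns="http://www.w3.org/2005/Atom"
      xmlns:os="http://a9.com/-/spec/opensearch/1.1/"
      xmlns:ex="http://example.com/schema"> <!-- hypothetical data namespace -->
  <id>urn:example:customers</id>
  <title>Customers</title>
  <updated>2010-01-01T00:00:00Z</updated>
  <!-- envelope extension: paging information, present in every feed -->
  <os:totalResults>1432</os:totalResults>
  <os:startIndex>1</os:startIndex>
  <os:itemsPerPage>25</os:itemsPerPage>
  <link rel="self" href="http://example.com/customers?page=1"/>
  <link rel="next" href="http://example.com/customers?page=2"/>
  <entry>
    <id>urn:example:customer:42</id>
    <title>Customer 42</title>
    <updated>2010-01-01T00:00:00Z</updated>
    <!-- data extension: the entity representation wrapped as entry content -->
    <content type="application/xml">
      <ex:customer><ex:name>ACME</ex:name></ex:customer>
    </content>
  </entry>
</feed>
```

Note how the envelope extension lives at the feed level and stays the same shape for every resource, while the entry content varies per entity type.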

Applying the above insights with different development libraries, the pieces of the puzzle fell into place, as all libraries supported the envisioned feed layouts very naturally. So I ended up choosing the libraries that supported my development best, which would of course be POJO style.

Installing HBase on Windows

HBase is a distributed, column-oriented store, modeled after Google’s BigTable. HBase is built on top of Hadoop for its MapReduce and distributed file system implementation. All these projects are open-source and part of the Apache Software Foundation.

Being distributed, large-scale platforms, the Hadoop and HBase projects mainly focus on *nix environments for production installations. However, being developed in Java, both projects are fully portable across platforms and, hence, also to the Windows operating system. For ease of development, the projects rely on Cygwin to have a *nix-like environment on Windows to run their shell scripts.

Purpose

This document explains the intricacies of running HBase on Windows using Cygwin as an all-in-one single-node installation for testing and development. The HBase Overview and QuickStart guides, on the other hand, go a long way in explaining how to set up HBase in more complex deployment scenarios.

Installation

For running HBase on Windows, 3 technologies are required: Java, Cygwin and SSH. The following paragraphs detail the installation of each of the aforementioned technologies.

Java

HBase depends on the Java Platform, Standard Edition 6. So the target system has to be provided with at least the Java Runtime Environment (JRE); however, if the system will also be used for development, the Java Development Kit (JDK) is preferred. You can download the latest versions of both from Sun’s download page. Installation is a simple GUI wizard that guides you through the process.

Cygwin

Cygwin is probably the oddest technology in this solution stack. It provides a dynamic link library that emulates most of a *nix environment on Windows. On top of that a whole bunch of the most common *nix tools are supplied. Combined, the DLL with the tools form a very *nix-alike environment on Windows.

For installation, Cygwin provides the setup.exe utility that tracks the versions of all installed components on the target system and provides the mechanism for installing or updating everything from the mirror sites of Cygwin.

To support installation, the setup.exe utility uses 2 directories on the target system: the Root directory for Cygwin (defaults to C:\cygwin), which will become / within the eventual Cygwin installation; and the Local Package directory (e.g. C:\cygsetup), which is the cache where setup.exe stores the packages before they are installed. The cache must not be the same folder as the Cygwin root.

Perform the following steps to install Cygwin, which are elaborately detailed in the 2nd chapter of the Cygwin User’s Guide:

  1. Make sure you have Administrator privileges on the target system.
  2. Choose and create your Root and Local Package directories. A good suggestion is to use the C:\cygwin\root and C:\cygwin\setup folders.
  3. Download the setup.exe utility and save it to the Local Package directory.
  4. Run the setup.exe utility,
    1. Choose the Install from Internet option,
    2. choose your Root and Local Package folders,
    3. and select an appropriate mirror.
    4. Don’t select any additional packages yet, as we only want to install Cygwin for now.
    5. Wait for the download and installation to complete,
    6. and finish the installation.
  5. Optionally, you can now also add a shortcut to your Start menu pointing to the setup.exe utility in the Local Package folder.
  6. Add a CYGWIN_HOME system-wide environment variable that points to your Root directory.
  7. Add %CYGWIN_HOME%\bin to the end of your PATH environment variable.
  8. Reboot the system after making changes to the environment variables, otherwise the OS will not be able to find the Cygwin utilities.
  9. Test your installation by running your freshly created shortcuts or the Cygwin.bat command in the Root folder. You should end up in a terminal window that is running a Bash shell. Test the shell by issuing following commands:
    1. cd / should take you to the Root directory in Cygwin;
    2. the ls command should list all files and folders in the current directory.
    3. Use the exit command to end the terminal.
  10. When needed, to uninstall Cygwin you can simply delete the Root and Local Package directory, and the shortcuts that were created during installation.
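As a sketch for steps 6 and 7 above (assuming the suggested C:\cygwin\root folder), the variables could be set from a Command Prompt with setx; note that this freezes the expanded value of PATH at the moment you run it, and a new Command Prompt is needed to see the change:

```
setx CYGWIN_HOME "C:\cygwin\root"
setx PATH "%PATH%;%CYGWIN_HOME%\bin"
```

Alternatively, set both through the System Properties dialog, which is what the reboot advice in step 8 assumes.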

SSH

HBase (and Hadoop) rely on SSH for interprocess/-node communication and launching remote commands. SSH will be provisioned on the target system via Cygwin, which supports running Cygwin programs as Windows services!

  1. Rerun the setup.exe utility.
  2. Leave all parameters as is, skipping through the wizard using the Next button until the Select Packages panel is shown.
  3. Maximize the window and click the View button to toggle to the list view, which is ordered alphabetically on Package, making it easier to find the packages we’ll need.
  4. Select the following packages by clicking the status word (normally Skip) so each is marked for installation. Use the Next button to download and install the packages.
    1. OpenSSH
    2. tcp_wrappers
    3. diffutils
    4. zlib
  5. Wait for the install to complete and finish the installation.

HBase

Download the latest release of HBase from the website. As the HBase distributable is just a zipped archive, installation is as simple as unpacking the archive so it ends up in its final installation directory. Notice that HBase has to be installed in Cygwin, and a good directory suggestion is to use /usr/local/ (or [Root directory]\usr\local in Windows slang). You should end up with a /usr/local/hbase-<version> installation in Cygwin.
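From within a Cygwin terminal, the unpacking step boils down to something like the following; the version number and download location are hypothetical, so adjust both to your situation:

```
cd /usr/local
tar xzf /cygdrive/c/cygwin/setup/hbase-0.20.3.tar.gz
ls -d hbase-*        # the installation directory should now be listed
```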

This finishes the installation. We go on with the configuration.

Configuration

There are 3 parts left to configure: Java, SSH and HBase itself. The following paragraphs explain each topic in detail.

Java

One important thing to remember about shell scripting in general (i.e. on *nix and Windows) is that managing, manipulating and assembling path names that contain spaces can be very hard, due to the need to escape and quote those characters and strings. So we try to stay away from spaces in path names. *nix environments can help us out here very easily by using symbolic links.

  1. Create a link in /usr/local to the Java home directory by using the following command and substituting the name of your chosen Java environment:
    ln -s /cygdrive/c/Program\ Files/Java/<jre name> /usr/local/<jre name>
  2. Test your Java installation by changing directories to your Java folder with cd /usr/local/<jre name> and issuing the command ./bin/java -version. This should output the version of your chosen JRE.
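The symlink trick itself is easy to try in any Cygwin (or other *nix-like) shell; a minimal, self-contained sketch using a throwaway directory instead of the real Java path:

```shell
# create a directory whose path contains a space (stand-in for "Program Files")
mkdir -p "/tmp/space demo/bin"
# give it a space-free alias via a symbolic link (-f replaces any old link)
ln -sfn "/tmp/space demo" /tmp/spacedemo
# the alias resolves to the original folder, spaces and all
ls -d /tmp/spacedemo/bin && echo OK
```

Scripts can then use the space-free alias everywhere, side-stepping all quoting headaches.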

SSH

Configuring SSH is quite elaborate, but primarily a question of launching it by default as a Windows service.

  1. On Windows Vista and above, make sure you run the Cygwin shell with elevated privileges, by right-clicking on the shortcut and using Run as Administrator.
  2. First of all, we have to make sure the rights on some crucial files are correct. Use the commands underneath. You can verify all rights by using the ls -l command on the different files. Also, notice that the auto-completion feature in the shell, using <TAB>, is extremely handy in these situations.
    1. chmod +r /etc/passwd to make the passwords file readable for all
    2. chmod u+w /etc/passwd to make the passwords file writable for the owner
    3. chmod +r /etc/group to make the groups file readable for all
    4. chmod u+w /etc/group to make the groups file writable for the owner
    5. chmod 755 /var to make the var folder writable to owner and readable and executable to all
  3. Edit the /etc/hosts.allow file using your favorite editor (why not vi in the shell!) and make sure the following two lines are in there before the PARANOID line:
    1. ALL : localhost 127.0.0.1/32 : allow
    2. ALL : [::1]/128 : allow
  4. Next we have to configure SSH by using the script ssh-host-config
    1. If this script asks to overwrite an existing /etc/ssh_config, answer yes.
    2. If this script asks to overwrite an existing /etc/sshd_config, answer yes.
    3. If this script asks to use privilege separation, answer yes.
    4. If this script asks to install sshd as a service, answer yes. Make sure you started your shell as Administrator!
    5. If this script asks for the CYGWIN value, just press <enter> as the default is ntsec.
    6. If this script asks to create the sshd account, answer yes.
    7. If this script asks to use a different user name as service account, answer no as the default will suffice.
    8. If this script asks to create the cyg_server account, answer yes. Enter a password for the account.
  5. Start the SSH service using net start sshd or cygrunsrv --start sshd. Notice that cygrunsrv is the utility that makes the process run as a Windows service.
  6. Confirm that you see a message stating that the CYGWIN sshd service was started successfully.
  7. Harmonize the Windows and Cygwin user accounts by using the commands:
    1. mkpasswd -cl > /etc/passwd
    2. mkgroup --local > /etc/group
  8. Test the installation of SSH:
    1. Open a new Cygwin terminal
    2. Use the command whoami to verify your userID
    3. Issue an ssh localhost to connect to the system itself
      1. Answer yes when presented with the server's fingerprint
      2. Issue your password when prompted
      3. Test a few commands in the remote session
      4. The exit command should take you back to your first shell in Cygwin
    4. Exit should terminate the Cygwin shell.
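For reference, the steps above condense into the following command sequence (run in an elevated Cygwin shell; ssh-host-config still asks the interactive questions answered in step 4, so this is a recap, not a substitute):

```shell
# File permissions (step 2)
chmod +r /etc/passwd; chmod u+w /etc/passwd
chmod +r /etc/group;  chmod u+w /etc/group
chmod 755 /var

# Configure and install the sshd service (step 4); answer the
# interactive questions as listed above
ssh-host-config

# Start the service (step 5) -- either of these works
net start sshd          # or: cygrunsrv --start sshd

# Sync Windows accounts into Cygwin (step 7)
mkpasswd -cl > /etc/passwd
mkgroup --local > /etc/group

# Smoke test (step 8)
whoami
ssh localhost
```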

HBase

If all the previous configuration is working properly, we just need some tinkering at the HBase config files to make everything resolve properly on Windows/Cygwin. All files and paths referenced here start from the HBase [installation directory] as working directory.

  1. HBase uses ./conf/hbase-env.sh to configure its dependencies on the runtime environment. Copy and uncomment the following lines just underneath their originals, changing them to fit your environment. They should read something like:
    1. export JAVA_HOME=/usr/local/<jre name>
    2. export HBASE_IDENT_STRING=$HOSTNAME as this most likely does not include spaces.
  2. HBase uses the hbase-default.xml file for configuration. Some properties do not resolve to existing directories because the JVM runs on Windows. This is the major issue to keep in mind when working with Cygwin: within the shell all paths are *nix-alike, hence relative to the root /. However, every parameter that is consumed within the Windows processes themselves needs to be a Windows setting, hence C:\-alike. Change the following properties in the configuration file, adjusting paths where necessary to conform with your own installation:
    1. hbase.rootdir must read e.g. file:///C:/cygwin/root/tmp/hbase/data
    2. hbase.tmp.dir must read C:/cygwin/root/tmp/hbase/tmp
    3. hbase.zookeeper.quorum must read 127.0.0.1 because for some reason localhost doesn't seem to resolve properly on Cygwin.
  3. Make sure the configured hbase.rootdir and hbase.tmp.dir directories exist and have the proper rights set up e.g. by issuing a chmod 777 on them.
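Assuming your Cygwin root is C:\cygwin\root, as in the example paths above, the Windows path C:/cygwin/root/tmp/hbase corresponds to /tmp/hbase inside the shell, so step 3 boils down to:

```shell
# Create the data and tmp directories referenced by hbase.rootdir
# and hbase.tmp.dir (adjust the paths if your installation differs)
mkdir -p /tmp/hbase/data /tmp/hbase/tmp

# Open up the rights as suggested in step 3
chmod -R 777 /tmp/hbase
```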

Testing

This should conclude the installation and configuration of HBase on Windows using Cygwin. So it's time to test it.

  1. Start a Cygwin terminal, if you haven't already.
  2. Change directory to the HBase installation using cd /usr/local/hbase-<version>, preferably using auto-completion.
  3. Start HBase using the command ./bin/start-hbase.sh
    1. When prompted to accept the SSH fingerprint, answer yes.
    2. When prompted, provide your password, possibly multiple times.
    3. When the command completes, the HBase server should have started.
    4. However, to be absolutely certain, check the logs in the ./logs directory for any exceptions.
  4. Next we start the HBase shell using the command ./bin/hbase shell
    1. Create a simple table using command create 'test', 'data'
    2. Verify the table exists using the command list
    3. Insert data into the table using e.g.
      put 'test', 'row1', 'data:1', 'value1'
      put 'test', 'row2', 'data:2', 'value2'
      put 'test', 'row3', 'data:3', 'value3'
    4. List all rows in the table using the command scan 'test', which should list all the rows previously inserted. Notice how 3 new columns were added without changing the schema!
    5. Finally we get rid of the table by issuing disable 'test' followed by drop 'test'; verify with list, which should now give an empty listing.
    6. Leave the shell by issuing exit
  5. To stop the HBase server issue the ./bin/stop-hbase.sh command. And wait for it to complete!!! Killing the process might corrupt your data on disk.
  6. In case of problems,
    1. verify the HBase logs in the ./logs directory.
    2. Try to fix the problem
    3. Get help on the forums or IRC (#hbase@freenode.net). People are very active and keen to help out!
    4. Stop, restart and retest the server.
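As a cheat sheet, the whole smoke test above condenses to the sequence below. Substitute your HBase version for the placeholder; the commands after ./bin/hbase shell are typed interactively inside the HBase shell:

```shell
cd /usr/local/hbase-<version>   # substitute your version
./bin/start-hbase.sh            # answer yes to the fingerprint, give passwords
./bin/hbase shell               # then, inside the shell:
#   create 'test', 'data'
#   put 'test', 'row1', 'data:1', 'value1'
#   scan 'test'
#   disable 'test'
#   drop 'test'
#   exit
./bin/stop-hbase.sh             # always stop cleanly and wait for completion!
```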

Conclusion

Now that your HBase server is running, start coding and build that next killer app on this particular, but scalable datastore!