Hazelcast Spring Integration


Beans Namespace

You can declare Hazelcast beans in a Spring context using the default Spring beans namespace, including Hazelcast maps, queues and other distributed objects.
Hazelcast-Spring integration requires either the hazelcast-spring jar or the hazelcast-all jar on the classpath.
<bean id="instance" class="com.hazelcast.core.Hazelcast" factory-method="newHazelcastInstance">
    <constructor-arg>
        <bean class="com.hazelcast.config.Config">
            <property name="groupConfig">
                <bean class="com.hazelcast.config.GroupConfig">
                    <property name="name" value="dev"/>
                    <property name="password" value="pwd"/>
                </bean>
            </property>
            <!-- and so on ... -->
        </bean>
    </constructor-arg>
</bean>
<bean id="map" factory-bean="instance" factory-method="getMap">
    <constructor-arg value="map"/>
</bean>

Hazelcast Namespace

Since version 1.9.1, Hazelcast provides Spring integration (Spring 2.5 or greater is required) via the hazelcast namespace.
  • Add the namespace declaration xmlns:hz="http://www.hazelcast.com/schema/config" to the beans tag in the context file:
    <beans xmlns="http://www.springframework.org/schema/beans"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xmlns:hz="http://www.hazelcast.com/schema/config"
           xsi:schemaLocation="http://www.springframework.org/schema/beans
                    http://www.springframework.org/schema/beans/spring-beans-2.5.xsd
                    http://www.hazelcast.com/schema/config
                    http://www.hazelcast.com/schema/config/hazelcast-spring.xsd">
  • Use the hz namespace shortcuts to declare the cluster, its items and so on.

Hazelcast Instance

After that, you can configure the Hazelcast instance (node):
<hz:hazelcast id="instance">
        <hz:config>
            <hz:group name="dev" password="password"/>
            <hz:properties>
                <hz:property name="hazelcast.merge.first.run.delay.seconds">5</hz:property>
                <hz:property name="hazelcast.merge.next.run.delay.seconds">5</hz:property>
            </hz:properties>
            <hz:network port="5701" port-auto-increment="false">
                <hz:join>
                    <hz:multicast enabled="false"
                                  multicast-group="224.2.2.3"
                                  multicast-port="54327"/>
                    <hz:tcp-ip enabled="true">
                        <hz:members>10.10.1.2, 10.10.1.3</hz:members>
                    </hz:tcp-ip>
                </hz:join>
            </hz:network>
            <hz:map name="map"
                    backup-count="2"
                    max-size="0"
                    eviction-percentage="30"
                    read-backup-data="true"
                    cache-value="true"
                    eviction-policy="NONE"
                    merge-policy="hz.ADD_NEW_ENTRY"/>
        </hz:config>
    </hz:hazelcast>
As of Hazelcast 1.9.3, you can easily configure map-store and near-cache too. (For map-store you should set either the class-name or the implementation attribute.)
<hz:config>
    <hz:map name="map1">
        <hz:near-cache time-to-live-seconds="0" max-idle-seconds="60"
                       eviction-policy="LRU" max-size="5000" invalidate-on-change="true"/>

        <hz:map-store enabled="true" class-name="com.foo.DummyStore"
                      write-delay-seconds="0"/>
    </hz:map>

    <hz:map name="map2">
        <hz:map-store enabled="true" implementation="dummyMapStore"
                      write-delay-seconds="0"/>
    </hz:map>

    <bean id="dummyMapStore" class="com.foo.DummyStore" />
</hz:config>
It is possible to use placeholders instead of concrete values.
For instance, use a property file app-default.properties for the group configuration:
<bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
    <property name="locations">
        <list>
            <value>classpath:/app-default.properties</value>
        </list>
    </property>
</bean>
<hz:hazelcast id="instance">
    <hz:config>
        <hz:group
            name="${cluster.group.name}"
            password="${cluster.group.password}"/>
        <!-- ... -->
    </hz:config>
</hz:hazelcast>

Hazelcast client

The client can be configured in a similar way:
<hz:client id="client"
    group-name="${cluster.group.name}" group-password="${cluster.group.password}">
    <hz:members>10.10.1.2:5701, 10.10.1.3:5701</hz:members>
</hz:client>

Maps, Queues and so on

You can declare beans for the following Hazelcast objects:
  • map
  • multiMap
  • queue
  • topic
  • set
  • list
  • executorService
  • idGenerator
  • atomicNumber
Example:
    <hz:map id="map" instance-ref="instance" name="map"/>
    <hz:multiMap id="multiMap" instance-ref="instance" name="multiMap"/>
    <hz:queue id="queue" instance-ref="instance" name="queue"/>
    <hz:topic id="topic" instance-ref="instance" name="topic"/>
    <hz:set id="set" instance-ref="instance" name="set"/>
    <hz:list id="list" instance-ref="instance" name="list"/>
    <hz:executorService id="executorService" instance-ref="instance" name="executorService"/>
    <hz:idGenerator id="idGenerator" instance-ref="instance" name="idGenerator"/>
    <hz:atomicNumber id="atomicNumber" instance-ref="instance" name="atomicNumber"/>
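These declared objects can then be injected into application beans like any other Spring bean. The following is a minimal sketch, assuming annotation-driven configuration (<context:annotation-config/>) is enabled; the consumer class and its field names are hypothetical:

import javax.annotation.Resource;

import org.springframework.beans.factory.annotation.Autowired;

import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;
import com.hazelcast.core.IQueue;

// Hypothetical consumer bean; declare it in the same context as the hz beans above.
public class SomeBean {

    @Autowired
    private HazelcastInstance instance;   // the <hz:hazelcast id="instance"> bean

    @Resource(name = "map")
    private IMap<Object, Object> map;     // the <hz:map id="map"> bean

    @Resource(name = "queue")
    private IQueue<Object> queue;         // the <hz:queue id="queue"> bean
}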

Caching And Processing 2TB Mozilla Crash Reports In Memory With Hazelcast

by Fuad Malikov

Mozilla processes terabytes of Firefox crash reports daily using HBase, Hadoop, Python and the Thrift protocol. The project is called Socorro, a system for collecting, processing, and displaying crash reports from clients. Today the Socorro application stores about 2.6 million crash reports per day. During peak traffic, it receives about 2.5K crashes per minute.

In this article we are going to demonstrate a proof of concept showing how Mozilla could integrate Hazelcast into Socorro to cache and process 2 TB of crash reports with a 50-node Hazelcast cluster. The video for the demo is available here.


Currently, Socorro has pythonic collectors, processors, and middleware that communicate with HBase via the Thrift protocol. One of the biggest limitations of the current architecture is that it is very sensitive to latency or outages on the HBase side. If the collectors cannot store an item in HBase then they will store it on local disk and it will not be accessible to the processors or middleware layer until it has been picked up by a cronjob and successfully stored in HBase. The same goes for processors, if they cannot retrieve the raw data and store the processed data, then it will not be available.

In the current implementation a collector stores the crash report into HBase and puts the id of the report into a queue stored in another HBase table. The processors poll the queue and start to process the reports. Given the stated problem of a potential HBase outage, the Metrics Team at Mozilla contacted us to explore the possible usage of the Hazelcast distributed map in front of HBase. The idea is to replace the Thrift layer with the Hazelcast client and store the crash reports into Hazelcast. An HBase map store will be implemented and Hazelcast will write the entries behind (asynchronously) to HBase via that persister. The system will cache the hot data and will continue to operate even if HBase is down. Besides HBase, Socorro uses Elastic Search to index the reports for later searching. In this design Elastic Search will also be fed by a Hazelcast map store implementation. This enables Socorro to switch the persistence layer very easily. For small-scale Socorro deployments, HBase usage becomes complex, and they would like to have simpler storage options available.
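The persister mentioned above boils down to an implementation of Hazelcast's MapStore interface, plugged into the map configuration with a write delay so that entries are written behind. The following is only a hypothetical sketch against the 2.x-era MapStore interface; the HBase and Elastic Search calls are omitted, and the CrashReport class is a stand-in for the POC's report object:

import java.util.Collection;
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

import com.hazelcast.core.MapStore;

// Stand-in for the real crash report object described later in the article.
class CrashReport implements java.io.Serializable {
    String json;   // JSON metadata (~1 KB median)
    byte[] dump;   // binary dump (~500 KB median)
}

// Hypothetical write-behind persister; Hazelcast calls store()/storeAll()
// asynchronously when the map is configured with a write-delay.
public class CrashReportStore implements MapStore<String, CrashReport> {

    public void store(String reportId, CrashReport report) {
        // write the crash report to HBase (and feed Elastic Search) here
    }

    public void storeAll(Map<String, CrashReport> reports) {
        for (Map.Entry<String, CrashReport> entry : reports.entrySet()) {
            store(entry.getKey(), entry.getValue());
        }
    }

    public void delete(String reportId) {
        // remove the corresponding row here
    }

    public void deleteAll(Collection<String> reportIds) {
        for (String reportId : reportIds) {
            delete(reportId);
        }
    }

    public CrashReport load(String reportId) {
        // read a single crash report from the backing store on a cache miss
        return null;
    }

    public Map<String, CrashReport> loadAll(Collection<String> reportIds) {
        return new HashMap<String, CrashReport>();
    }

    public Set<String> loadAllKeys() {
        // returning null means nothing is pre-loaded at startup
        return null;
    }
}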

So we decided to build a POC for the Mozilla team and run it on EC2 servers. The details of the POC requirements can be found at the wiki page. The implementation is available under the Apache 2.0 license.

Distributed Design

The number one rule of distributed programming is "do not distribute". The throughput of the system increases as you move the operation to the data, rather than the data to the operation. In Hazelcast the entries are almost evenly distributed among the cluster nodes. Each node owns a set of entries. Within the application you need to process the items. There are two options here: you can either design the system with the processing in mind and bring the data to the execution, or you can take advantage of the data being distributed and do the execution on the node owning the data. If the second choice can be applied, we expect the system to behave much better in terms of latency, throughput, network and CPU usage. In this particular case the crash reports are stored in a distributed map called "crashReports" with the report id as the key. We would like to process a particular crash report on the node that owns it.

Collectors

A Collector puts the crash report into the "crashReports" map and puts the report id, with the insert date as the value, into another map called "process" that holds the reports that need to be processed. These two put operations are done within a transaction. Since both entries in the "crashReports" and "process" maps have the same key (the report id), they are owned by the same node; in Hazelcast, the key determines the owner node.
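As a rough illustration, a collector's two puts could look like the sketch below, written against the Hazelcast 2.x-era transaction API (newer versions use a TransactionContext instead); the class and error handling are simplified assumptions, not the actual POC code:

import java.util.Date;
import java.util.Map;

import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.Transaction;

// Hypothetical collector: both puts use the report id as the key, so both
// entries land on the same member, and they are committed atomically.
public class Collector {

    private final HazelcastInstance hazelcast;

    public Collector(HazelcastInstance hazelcast) {
        this.hazelcast = hazelcast;
    }

    public void collect(String reportId, Object crashReport) {
        Map<String, Object> crashReports = hazelcast.getMap("crashReports");
        Map<String, Date> process = hazelcast.getMap("process");

        Transaction txn = hazelcast.getTransaction();
        txn.begin();
        try {
            crashReports.put(reportId, crashReport);
            process.put(reportId, new Date());   // marks the report as "to be processed"
            txn.commit();
        } catch (RuntimeException e) {
            txn.rollback();
            throw e;
        }
    }
}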

Processors

A Processor on each node adds a local entry listener on the "process" map and is notified when a new entry is added. Notice that the listener is not listening to the entire map, but only to the local entries. There will be one and only one event per entry, and that event will be delivered locally. Remember the "number one rule of distributed programming".

When the processor is notified of an insert, it gets the report, does the processing, puts the updated item back into "crashReports" and removes the entry from the "process" map. All these operations are local, except the put, which does one more put on the backup node. There is one more thing missing here: what if the Processor crashes or is not able to process the report when it gets the notification? The good news is that the information about a particular crash report that needs to be processed is still in the "process" map. Another thread loops through the local entries in the "process" map and processes any lingering entries. Note that under normal circumstances, if there is no backlog, the map should be empty.
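A hypothetical sketch of such a processor, again against the Hazelcast 2.x-era API (the actual processing is a placeholder):

import java.util.Date;

import com.hazelcast.core.EntryEvent;
import com.hazelcast.core.EntryListener;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;

// Hypothetical processor: it is notified only for entries owned by this member.
public class Processor implements EntryListener<String, Date> {

    private final IMap<String, Object> crashReports;
    private final IMap<String, Date> process;

    public Processor(HazelcastInstance hazelcast) {
        this.crashReports = hazelcast.getMap("crashReports");
        this.process = hazelcast.getMap("process");
        process.addLocalEntryListener(this);   // only local entries trigger events here
    }

    public void entryAdded(EntryEvent<String, Date> event) {
        String reportId = event.getKey();
        Object raw = crashReports.get(reportId);   // local read: this member owns the key
        Object processed = processReport(raw);     // placeholder for the real processing
        crashReports.put(reportId, processed);     // local put, plus one put on the backup node
        process.remove(reportId);                  // the report is done
    }

    public void entryUpdated(EntryEvent<String, Date> event) { }
    public void entryRemoved(EntryEvent<String, Date> event) { }
    public void entryEvicted(EntryEvent<String, Date> event) { }

    private Object processReport(Object rawReport) {
        return rawReport;   // application-specific work goes here
    }
}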

Data size and Latency

The object that we store in Hazelcast consists of:

  • a JSON document with a median size of 1 KB, a minimum size of 256 bytes and a maximum size of 20 KB.
  • a binary data blob with a median size of 500 KB; minimum 200 KB; maximum 20 MB; 75th percentile 5 MB.
This makes an average of about 4 MB of data per crash report. Given the latency and network bandwidth among the EC2 servers, the latency and throughput of the application would be much better in a more realistic environment.

Demo Details

To run the application you only need the latest Hazelcast and Elastic Search jars. The entry point to the application is the com.hazelcast.socorro.Node class. When you run the main method, it will create one Collector and one Processor, so each node is both a Collector and a Processor. The Collectors simulate the generation of crash reports and submit them into the data grid. The Processors receive the newly submitted crash reports, process them and store them back into the distributed map. If you enable persistence, Hazelcast persists the data to Elastic Search.

We ran the application on a 50-node cluster of EC2 servers. In our simulation the average crash report size was 4 MB. The aim was to store as many crash reports as we could, so we went with the largest possible memory, the 68.4 GB available on a High-Memory Quadruple Extra Large Instance.
Hazelcast nodes use multicast or TCP/IP to discover each other. In the EC2 environment multicast is not enabled, but nodes can see each other via TCP/IP. This requires passing the IP addresses to all of the nodes somehow. In the article "Running Hazelcast on a 100 node Amazon EC2 Cluster" we described a way of deploying an application at large scale on EC2. In this demo we used a slightly different approach. We developed a tool, called Cloud Tool, that starts N nodes. The tool takes as input a script describing how to run the application and the configuration file for Hazelcast. It builds the network configuration part with the IP addresses of the nodes and copies (scp) both the configuration and the script to the nodes. The nodes, on start, wait for the configuration and script files. After they are copied, all nodes execute the script. It is the script's responsibility to download and run the application. This way, within minutes, we can deploy any Hazelcast application on any number of nodes.

With this approach we deployed the Socorro POC app on 50 m2.4xlarge instances. By default each node generates 1 crash report per second, which makes 3K crash reports per minute in the cluster. In the app, we listen to a Hazelcast topic called "command". This enables us to increase and decrease the load externally; l3000 means "generate a total of 3000 crash reports per minute".
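As an illustration, a listener on the "command" topic could look roughly like the sketch below (written against the Hazelcast 2.x-era topic API; the class name and the way the load is split across nodes are assumptions, not the actual POC code):

import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.ITopic;
import com.hazelcast.core.Message;
import com.hazelcast.core.MessageListener;

// Hypothetical command listener: "l3000" means "generate a total of 3000
// crash reports per minute across the cluster".
public class CommandListener implements MessageListener<String> {

    private final HazelcastInstance hazelcast;
    private volatile int reportsPerMinuteForThisNode;

    public CommandListener(HazelcastInstance hazelcast) {
        this.hazelcast = hazelcast;
        ITopic<String> topic = hazelcast.getTopic("command");
        topic.addMessageListener(this);
    }

    public void onMessage(Message<String> message) {
        String command = message.getMessageObject();
        if (command.startsWith("l")) {
            int totalPerMinute = Integer.parseInt(command.substring(1));
            int clusterSize = hazelcast.getCluster().getMembers().size();
            // each node takes its share of the requested cluster-wide load
            reportsPerMinuteForThisNode = totalPerMinute / clusterSize;
        }
    }

    public int getReportsPerMinuteForThisNode() {
        return reportsPerMinuteForThisNode;
    }
}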
We did the demo and recorded all the steps. We used the Hazelcast Management Center to visualize the cluster. Through this tool we can observe the throughput of the nodes, publish messages to the topic, and observe the behavior of the cluster.

Later, we implemented the persistence to Elastic Search, but it couldn't keep up with the data load that we wanted to persist. We started 5 Hazelcast nodes and 5 ES nodes, and a backlog appeared while persisting to ES. Our guess is that this is because of EC2 I/O. It seems that, given an average file size of 4 MB, there is no way of creating a 50-node cluster, generating 3K reports per minute and storing them into ES, at least not in an EC2 environment and with our current ES knowledge. We still started a relatively small cluster persisting to a 5-node ES cluster and recorded it until it was obvious that we could not persist everything from memory to ES. A 10 minute video is available here.

We think that in Mozilla's environment it will be easy to store to HBase, and for small deployments of Socorro, ES will serve quite well.

Scalability

Another goal of the POC was to demonstrate how one can scale up by adding more and more nodes. So we did another recording where we start with 5 nodes. The cluster was able to generate and process 600 crash reports per minute and stored 200 GB of data in memory. Then we iteratively added 5 more nodes and increased the load. We ended with a 20-node cluster that was able to process 2400 reports per minute and store 800 GB of data in memory. We would like to highlight one thing here: adding a node does not immediately scale you up. It has an initial cost, namely backups and migrations. When new nodes arrive, Hazelcast starts to migrate some of the partitions (buckets) from the old nodes to the new ones to achieve an even balance across the cluster. Once a partition is moved, all entries falling into it will be stored on the node owning that partition. More information on this topic can be found here.

In this demo the amount of data that we cache is huge, so migrations take some time. Also note that during a migration, a request for the data being migrated has to wait until the migration is completed, for the sake of consistency. That's why it is not a good idea to add nodes when your cluster holds a significantly large amount of data and is under a high load to which it can barely respond.

An alternative approach

Finally, we would like to discuss another approach that we tried but didn't record. For the sake of throughput we chose to process the entries locally. This local approach performed better but has two limitations:

  • All nodes of the cluster that own data must also be processors. You cannot say that on a 50-node cluster, 20 nodes are collectors and the rest are processors; all nodes have to be equal.
  • For a node to process the data, it must own it first. So if you add a new node, it initially owns no partitions and does not participate in processing. As partitions migrate to it, it participates more and more; you have to wait until the cluster is balanced to fully benefit from newly added nodes.

The alternative approach was to store the ids of unprocessed entries in a distributed queue. Processors take ids from the queue and process the associated records. The data is not local here; most of the time a processor will be processing a record that it doesn't own. The performance is worse, but you can separate processors from collectors, and a newly added processor immediately starts doing its job.
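A hypothetical sketch of this queue-based variant (the queue name and classes are assumptions):

import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IMap;
import com.hazelcast.core.IQueue;

// Hypothetical queue-driven processor: ids come from a distributed queue, so
// the record being processed is usually owned by some other member.
public class QueueProcessor implements Runnable {

    private final IQueue<String> toProcess;
    private final IMap<String, Object> crashReports;

    public QueueProcessor(HazelcastInstance hazelcast) {
        this.toProcess = hazelcast.getQueue("toProcess");
        this.crashReports = hazelcast.getMap("crashReports");
    }

    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            try {
                String reportId = toProcess.take();         // blocks until an id is available
                Object raw = crashReports.get(reportId);    // usually a remote read
                crashReports.put(reportId, process(raw));   // usually a remote put
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }

    private Object process(Object rawReport) {
        return rawReport;   // placeholder for the real processing
    }
}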


How Google Tests Software - Part Five


By James Whittaker 


Instead of distinguishing between code, integration and system testing, Google uses the language of small, medium and large tests emphasizing scope over form. Small tests cover small amounts of code and so on. Each of the three engineering roles may execute any of these types of tests and they may be performed as automated or manual tests.

Small Tests are mostly (but not always) automated and exercise the code within a single function or module. They are most likely written by a SWE or an SET and may require mocks and faked environments to run but TEs often pick these tests up when they are trying to diagnose a particular failure. For small tests the focus is on typical functional issues such as data corruption, error conditions and off by one errors. The question a small test attempts to answer is does this code do what it is supposed to do?

Medium Tests can be automated or manual and involve two or more features and specifically cover the interaction between those features. I've heard any number of SETs describe this as "testing a function and its nearest neighbors." SETs drive the development of these tests early in the product cycle as individual features are completed and SWEs are heavily involved in writing, debugging and maintaining the actual tests. If a test fails or breaks, the developer takes care of it autonomously. Later in the development cycle TEs may perform medium tests either manually (in the event the test is difficult or prohibitively expensive to automate) or with automation. The question a medium test attempts to answer is does a set of near neighbor functions interoperate with each other the way they are supposed to?

Large Tests cover three or more (usually more) features and represent real user scenarios to the extent possible. There is some concern with overall integration of the features but large tests tend to be more results-driven, i.e., did the software do what the user expects? All three roles are involved in writing large tests and everything from automation to exploratory testing can be the vehicle to accomplish it. The question a large test attempts to answer is does the product operate the way a user would expect?

The actual language of small, medium and large isn’t important. Call them whatever you want. The important thing is that Google testers share a common language to talk about what is getting tested and how those tests are scoped. When some enterprising testers began talking about a fourth class they dubbed enormous every other tester in the company could imagine a system-wide test covering nearly every feature and that ran for a very long time. No additional explanation was necessary.

The primary driver of what gets tested and how much is a very dynamic process and varies wildly from product to product. Google prefers to release often and leans toward getting a product out to users so we can get feedback and iterate. The general idea is that if we have developed some product or a new feature of an existing product we want to get it out to users as early as possible so they may benefit from it. This requires that we involve users and external developers early in the process so we have a good handle on whether what we are delivering is hitting the mark.

Finally, the mix between automated and manual testing definitely favors the former for all three sizes of tests. If it can be automated and the problem doesn’t require human cleverness and intuition, then it should be automated. Only those problems, in any of the above categories, which specifically require human judgment, such as the beauty of a user interface or whether exposing some piece of data constitutes a privacy concern, should remain in the realm of manual testing.

Having said that, it is important to note that Google performs a great deal of manual testing, both scripted and exploratory, but even this testing is done under the watchful eye of automation. Industry leading recording technology converts manual tests to automated tests to be re-executed build after build to ensure minimal regressions and to keep manual testers always focusing on new issues. We also automate the submission of bug reports and the routing of manual testing tasks. For example, if an automated test breaks, the system determines the last code change that is the most likely culprit, sends email to its authors and files a bug. The ongoing effort to automate to within the “last inch of the human mind” is currently the design spec for the next generation of test engineering tools Google is building.

Those tools will be highlighted in future posts. However, my next target is going to revolve around The Life of an SET. I hope you keep reading.

How Google Tests Software - Part Three

Lots of questions in the comments to the last two posts. I am not ignoring them. Hopefully many of them will be answered here and in following posts. I am just getting started on this topic.

At Google, quality is not equal to test. Yes I am sure that is true elsewhere too. “Quality cannot be tested in” is so cliché it has to be true. From automobiles to software if it isn’t built right in the first place then it is never going to be right. Ask any car company that has ever had to do a mass recall how expensive it is to bolt on quality after-the-fact.

However, this is neither as simple nor as accurate as it sounds. While it is true that quality cannot be tested in, it is equally evident that without testing it is impossible to develop anything of quality. How does one decide if what you built is high quality without testing it?

The simple solution to this conundrum is to stop treating development and test as separate disciplines. Testing and development go hand in hand. Code a little and test what you built. Then code some more and test some more. Better yet, plan the tests while you code or even before. Test isn’t a separate practice, it’s part and parcel of the development process itself. Quality is not equal to test; it is achieved by putting development and testing into a blender and mixing them until one is indistinguishable from the other.

At Google this is exactly our goal: to merge development and testing so that you cannot do one without the other. Build a little and then test it. Build some more and test some more. The key here is who is doing the testing. Since the number of actual dedicated testers at Google is so disproportionately low, the only possible answer has to be the developer. Who better to do all that testing than the people doing the actual coding? Who better to find the bug than the person who wrote it? Who is more incentivized to avoid writing the bug in the first place? The reason Google can get by with so few dedicated testers is because developers own quality. In fact, teams that insist on having a large testing presence are generally assumed to be doing something wrong. Having too large a test team is a very strong sign that the code/test mix is out of balance. Adding more testers is not going to solve anything.

This means that quality is more an act of prevention than it is detection. Quality is a development issue, not a testing issue. To the extent that we are able to embed testing practice inside development, we have created a process that is hyper incremental where mistakes can be rolled back if any one increment turns out to be too buggy. We’ve not only prevented a lot of customer issues, we have greatly reduced the number of testers necessary to ensure the absence of recall-class bugs. At Google, testing is aimed at determining how well this prevention method is working. TEs are constantly on the lookout for evidence that the SWE-SET combination of bug writers/preventers is skewed toward the latter, and TEs raise alarms when that process seems out of whack.

Manifestations of this blending of development and testing are all over the place from code review notes asking ‘where are your tests?’ to posters in the bathrooms reminding developers about best testing practices, our infamous Testing On The Toilet guides. Testing must be an unavoidable aspect of development and the marriage of development and testing is where quality is achieved. SWEs are testers, SETs are testers and TEs are testers.

If your organization is also doing this blending, please share your successes and challenges with the rest of us. If not, then here is a change you can help your organization make: get developers fully vested in the quality equation. You know the old saying that chickens are happy to contribute to a bacon and egg breakfast but the pig is fully committed? Well, it's true...go oink at one of your developers and see if they oink back. If they start clucking, you have a problem.

What should a developer know before building a public web site?

from stackoverflow:
http://stackoverflow.com/questions/72394/what-should-a-developer-know-before-building-a-public-web-site


The idea here is that most of us should already know most of what is on this list. But there just might be one or two items you haven't really looked into before, don't fully understand, or maybe never even heard of.
Interface and User Experience
  • Be aware that browsers implement standards inconsistently and make sure your site works reasonably well across all major browsers. At a minimum test against a recent Gecko engine (Firefox), a WebKit engine (Safari, Chrome, and some mobile browsers), your supported IE browsers (take advantage of the Application Compatibility VPC Images), and Opera. Also consider how browsers render your site in different operating systems.
  • Consider how people might use the site other than from the major browsers: cell phones, screen readers and search engines, for example. — Some accessibility info: WAI and Section508, Mobile development: MobiForge
  • Staging: How to deploy updates without affecting your users. Ed Lucas's answer has some comments on this.
  • Don't display unfriendly errors directly to the user
  • Don't put users' email addresses in plain text as they will get spammed to death
  • Build well-considered limits into your site - This also belongs under Security.
  • Learn how to do progressive enhancement
  • Always redirect after a POST (see the servlet sketch after this list).
  • Don't forget to take accessibility into account. It's always a good idea and in certain circumstances it's a legal requirement. WAI-ARIA is a good resource in this area.
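For the "always redirect after a POST" item above, here is a minimal Post/Redirect/Get sketch as a Java servlet; the servlet, the target URL and the persistence call are hypothetical:

import java.io.IOException;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical form handler: the POST never renders a page itself, so
// refreshing the resulting page cannot resubmit the form.
public class AddCommentServlet extends HttpServlet {

    protected void doPost(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        String comment = request.getParameter("comment");
        saveComment(comment);                 // placeholder for the real persistence
        response.sendRedirect("/comments");   // the browser then GETs the result page
    }

    private void saveComment(String comment) {
        // store the comment somewhere
    }
}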
Security
Performance
  • Implement caching if necessary, understand and use HTTP caching properly as well as HTML5 Manifest
  • Optimize images - don't use a 20 KB image for a repeating background
  • Learn how to gzip/deflate content (deflate is better)
  • Combine/concatenate multiple stylesheets or multiple script files to reduce number of browser connections and improve gzip ability to compress duplications between files
  • Take a look at the Yahoo Exceptional Performance site, lots of great guidelines including improving front-end performance and their YSlow tool. Google page speed is another tool for performance profiling. Both require Firebug installed.
  • Use CSS Image Sprites for small related images like toolbars (see the "minimize http requests" point)
  • Busy web sites should consider splitting components across domains. Specifically...
  • Static content (i.e., images, CSS, JavaScript, and generally content that doesn't need access to cookies) should go in a separate domain that does not use cookies, because all cookies for a domain and its subdomains are sent with every request to the domain and its subdomains. One good option here is to use a Content Delivery Network (CDN).
  • Minimize the total number of HTTP requests required for a browser to render the page.
  • Utilize Google Closure Compiler for JavaScript and other minification tools
  • Make sure there’s a favicon.ico file in the root of the site, i.e. /favicon.ico. Browsers will automatically request it, even if the icon isn’t mentioned in the HTML at all. If you don’t have a /favicon.ico, this will result in a lot of 404s, draining your server’s bandwidth.
SEO (Search Engine Optimization)
  • Use "search engine friendly" URL's, i.e. use example.com/pages/45-article-title instead ofexample.com/index.php?page=45
  • Don't use links that say "click here". You're wasting an SEO opportunity and it makes things harder for people with screen readers.
  • Have an XML sitemap, preferably in the default location /sitemap.xml.
  • Use <link rel="canonical" ... /> when you have multiple URLs that point to the same content
  • Use Google Webmaster Tools and Yahoo Site Explorer
  • Install Google Analytics right at the start (or an open source analysis tool like Piwik)
  • Know how robots.txt and search engine spiders work
  • Redirect requests (using 301 Moved Permanently) asking for www.example.com to example.com (or the other way round) to prevent splitting the Google ranking between both sites (a servlet-filter sketch follows this list)
  • Know that there can be badly behaved spiders out there
  • If you have non-text content look into Google's sitemap extensions for video, etc. There is some good information about this in Tim Farley's answer.
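One possible way to implement the www-to-bare-domain redirect mentioned above is a servlet filter; this is only a sketch, and example.com is a placeholder host:

import java.io.IOException;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical filter that 301-redirects www.example.com to example.com so
// search engines see a single canonical host.
public class CanonicalHostFilter implements Filter {

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        HttpServletResponse response = (HttpServletResponse) res;

        if ("www.example.com".equalsIgnoreCase(request.getServerName())) {
            String query = request.getQueryString();
            String target = "http://example.com" + request.getRequestURI()
                    + (query != null ? "?" + query : "");
            response.setStatus(HttpServletResponse.SC_MOVED_PERMANENTLY);   // 301
            response.setHeader("Location", target);
            return;
        }
        chain.doFilter(req, res);
    }

    public void init(FilterConfig config) { }

    public void destroy() { }
}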
Technology
  • Understand HTTP and things like GET, POST, sessions, cookies, and what it means to be "stateless".
  • Write your XHTML/HTML and CSS according to the W3C specifications and make sure they validate. The goal here is to avoid browser quirks modes and as a bonus make it much easier to work with non-standard browsers like screen readers and mobile devices.
  • Understand how JavaScript is processed in the browser.
  • Understand how JavaScript, style sheets, and other resources used by your page are loaded and consider their impact on perceived performance. It may be appropriate in some cases to move scripts to the bottom of your pages.
  • Understand how the JavaScript sandbox works, especially if you intend to use iframes.
  • Be aware that JavaScript can and will be disabled, and that Ajax is therefore an extension not a baseline. Even if most normal users leave it on now, remember that NoScript is becoming more popular, mobile devices may not work as expected, and Google won't run most of your JavaScript when indexing the site.
  • Learn the difference between 301 and 302 redirects (this is also an SEO issue).
  • Learn as much as you possibly can about your deployment platform
  • Consider using a Reset Style Sheet
  • Consider JavaScript frameworks (such as jQuery, MooTools, or Prototype), which will hide a lot of the browser differences when using JavaScript for DOM manipulation
Bug fixing
  • Understand you'll spend 20% of the time coding and 80% of it maintaining, so code accordingly
  • Set up a good error reporting solution
  • Have some system for people to contact you with suggestions and criticism.
  • Document how the application works for future support staff and people performing maintenance
  • Make frequent backups! (And make sure those backups are functional) Ed Lucas's answer has some advice. Have a Restore strategy, not just a Backup strategy.
  • Use a version control system to store your files, such as Subversion or Git
  • Don't forget to do your Unit Testing. Frameworks like Selenium can help.



The cruel, hard facts:
  • Users spend as much time on your website as an interviewer does reading your resume when submitted in a pile of thousands of others
  • Users spend very little time on your website: Read, seconds.
  • Users are lazy and they would rather be somewhere else
  • If the user can't find what they are looking for within seconds, they leave
  • If the user cannot identify what the website is all about, they leave
  • If the website does not 'just work', they leave
  • If the website annoys users or does not appeal to them aesthetically, they leave
Everything about websites and website design revolves around these facts.
  • Clear Navigation
  • Conciseness
  • Branding strategies
  • Colors, schemes, aesthetics, text placement, text formatting
  • Helpful, not hindering, Ajax/JavaScript
  • Not reinventing the wheel when it comes to website use, navigation, etc.
This is just an outline on why it is so important to adhere to standards and read those website design books.