Wednesday, June 17, 2009

How to effectively use a Search Engine?

Most people think of a search engine as a website where you type a term or a phrase and get back results. While this is absolutely true, the power of a search engine is not limited to searching a few terms or phrases. Search engines are a very powerful tool that can help us find virtually anything on the internet.

At present there are three major search engines competing in the market: Google, Yahoo and Bing. In this article let’s unleash the power of a search engine and see how effectively we can use it. But before going into the details, let’s first brush up our basics about search engines.

What is a Search Engine?
A Web search engine is a tool designed to search for information on the World Wide Web. The search results are usually presented in a list and are commonly called hits. The information may consist of web pages, images and other types of files.

Web search engines work by storing information about many web pages, which they retrieve from the WWW itself. These pages are retrieved by a Web crawler (sometimes also known as a spider) — an automated Web browser which follows every link it sees. Exclusions can be made by the use of robots.txt. The contents of each page are then analyzed to determine how it should be indexed (for example, words are extracted from the titles, headings, or special fields called meta tags).
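Exclusion via robots.txt works by placing a plain-text file at the site root that tells crawlers which paths to skip. A minimal illustrative sketch (the paths are placeholders):

```text
User-agent: *
Disallow: /private/
Disallow: /admin/
```

Well-behaved crawlers read this file before fetching any page and will not index the listed paths.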

Now that we know what a search engine is, let’s begin by understanding how to use search terms or phrases effectively. Poorly chosen search terms may produce unexpected results. Hence it is very important to know what to search for.

Building Effective Search Words/Phrases
When we want to search for a particular article, we first think of that article’s focus area. For example, if I want to search for “anonymous methods in C#”, then my search phrase should contain the words most likely to occur in that article. These words should be spelled correctly and should be short and to the point. Let’s see some of the key points for making searches more effective:
  • Spell words correctly
  • Remember to leave a space between each word in your search query
  • Use the most effective words – i.e. words that you expect to occur frequently on the target web site. The choice of words makes a lot of difference to the search engine. Remember that every word matters to the search engine!
  • Use the OR and NOT keywords to combine or exclude words. Support for these keywords differs from search engine to search engine. Also note that the words OR and NOT should be capitalized
  • Use fewer descriptive words, or try words that have a different but similar meaning. This may produce different search results, including the one you are expecting
  • Search for exact phrases by placing the search words within quotation marks
  • Do not use long search phrases, since search engines limit the number of words that can be searched
  • Click on the category to see category-specific search results such as web, images, people, maps etc.
Here are a few key points to note about most search engines:
  • Search engines aren’t case sensitive
  • Common words such as a, the, an, as etc. are ignored by the search engine. These are called stop words. If you want these words to be included, then enclose them in double quotation marks
  • If you are searching for a date, make sure you use standard date formats. Any custom date format you are using might not be known to the search engine. Also, as far as possible, use month names instead of their integer equivalents
  • There is no need to use the word “AND” in your search query. By default all searches are “AND” searches, i.e. the engine combines your search words using AND. For example: green trees means green AND trees
Apart from these, you can use operators and wildcards to fine-tune your search results:
  • Phrase Search (“”): As discussed earlier, use double quotation marks for searching exact phrases.
  • Search within a website (site:[site_name]): A query can be limited to a particular website by including the term “site:[site_name]”. For example: C# site:example.com will search for C# only within example.com
  • Search within a title (intitle:[search_words]): If you want to search for pages with a specific title, you can use the “intitle” operator. For example: intitle:Sandeep will find all pages whose title contains the word “Sandeep”.
  • Search exact term (+): By attaching a + immediately before a word (remember, don't add a space after the +), you can get the search results for the precisely typed word. This will specifically ignore any synonyms.
  • Terms to exclude (-): Attaching a minus sign immediately before a word indicates that you do not want pages containing this word to appear in your results. The (-) sign works similarly to the NOT keyword discussed above.
  • Fill in the blanks (*): The * represents a wildcard and tells the search engine to treat * as a set of unknown words.
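Putting these operators together, here are a few illustrative queries (the site name is a placeholder, and exact behavior varies by engine):

```text
"anonymous methods" site:example.com     exact phrase, restricted to one site
intitle:C# filetype:ppt                  PowerPoint files with C# in the page title
csharp tutorial -java                    C# tutorial pages that exclude "java"
"how to * a delegate"                    * stands in for one or more unknown words
```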
Apart from these generic tips, there are some search engine specific tips that you can find on their respective help pages. By following these common rules you can make an effective search.

Searching for Files
It is often required to search for files on the internet, and it is very difficult to visit every website and check whether the required file is present there. For example, if I want to search for PowerPoint presentations on C#, I can search for files having extension .ppt or .pptx using the “filetype” operator as shown below:

Search Query: C# filetype:ppt

This will give me search results that point directly to PowerPoint presentations related to C#. This is supported by Google, Yahoo and Bing.

Finding Vulnerabilities Using Search Engine
This might sound weird at first sight, but it is absolutely true: hackers and malicious attackers make use of search engines to find web-based vulnerabilities. A powerful search engine is actually a helpful tool for hackers looking for flaws. Let’s understand this using an example. TSWEB is a tool used to acquire a Remote Desktop Connection via the internet. Many companies expose their computer systems via TSWEB for flexible operation and control; this requires exposing a URL that lets internet users access the system, and it is typically used by system administrators to control systems remotely. But when this exposed URL gets indexed by a search engine, it effectively gets exposed to hackers, who search for TSWEB-enabled systems using the search engine and then attack them. Hence it is a best practice to hide such URLs from crawlers using robots.txt. Thus it becomes important to make a website search engine optimized both for efficient searching and for security reasons.
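As an illustration, a classic query of this kind from public “Google hacking” lists looks like the following (operator support varies by engine; shown here purely to make the risk concrete):

```text
intitle:"Remote Desktop Web Connection" inurl:tsweb
```

Pages matching such a query are exactly the exposed remote-desktop login pages an administrator would want kept out of the index.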

Search Engine As Calculator
Yes! Your search engine is your handy online calculator. Search engines are intelligent enough to recognize mathematical expressions, perform the calculation and give back the result. So if you search for 5+2, the engine will return the result 7 and will also search for websites containing the expression 5+2. Most common arithmetic calculations are supported.
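A few example calculator queries (exact function support varies from engine to engine):

```text
5+2          = 7
(12*7)-30    = 54
sqrt(144)    = 12
2^10         = 1024
```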

Custom Search Engine For Your Website
Often we require a search engine that will search for keywords ONLY in our site or in a list of sites. This is typically required for product sites, where a huge list of products is displayed on the web and the user wants to search for a particular product on your site. This can be achieved by creating a custom search engine. Both Google and Bing provide facilities for creating a custom search engine that suits your needs. Bing also provides a programmatic interface to the search engine, which returns search results as XML or JSON; the Bing APIs are exposed for achieving this.

Why Not Earn Money From Search Engines?
Yes, search engines provide the facility to embed advertisements into the result set of your custom search engine. When users click on these advertisements, you get paid. This is not the primary use of a search engine, but it is definitely one of the most used options for earning ;)

Search Engine Optimization
Search engine optimization (SEO) is the process of improving the volume or quality of traffic to a web site from search engines via "natural" ("organic" or "algorithmic") search results. By following the SEO practices you increase the probability of your site appearing at higher positions in the search result. It’s important to note that SEO practices provide the potential for higher content coverage and ranking, but do not guarantee it. SEO discussion is out of scope for this topic. You can read my blog on SEO tips for search engines.

We have discussed only a part of the search engine’s capabilities here. The search engine is a huge beast! Kudos to Google, Yahoo and Bing for providing such powerful search engines!

Tuesday, June 16, 2009

SEO and IIS SEO Toolkit

I have created a presentation to help you understand the basics of Search Engine Optimization (SEO). SEO techniques are used to optimize web pages for search engines. Some key SEO tips and tricks help us increase the page rank and eventually list our site in the top few search results.
The presentation also has links for the usage of the IIS SEO Toolkit.

SEO and the IIS SEO Toolkit

Tuesday, June 2, 2009

Bing! Microsoft's new search engine is live!

It's Bing!

Bing is a phonic sound indicating that you found the thing you were searching for. I think that by the term "Bing" Microsoft wants to say that you can find exactly what you are searching for. Microsoft has evolved its search offerings from MSN Search and Live Search to Bing, which has a very powerful and improved search engine. Even the indexing mechanism has been improved. Microsoft claims Bing to be a decision engine rather than just a search engine.

In an accompanying video, Stefan Weitz, a director on the Microsoft Search team, discusses the development of Bing around users’ needs, focusing on four key areas: speed, relevance, previews and multimedia.

You can find the press release of Bing here.

Monday, June 1, 2009

General Motors (GM) files bankruptcy!

General Motors (GM) filed for Chapter 11 bankruptcy protection Monday morning, submitting its reorganization papers to a federal clerk in Lower Manhattan. This is in accordance with the Obama administration's plan to shrink the automaker to a sustainable size and give a majority ownership stake to the federal government. It has brought the 100-year-old giant to its knees. GM was one of the largest automakers in America and had a strong economic hold. About 20,000 workers are expected to lose their jobs directly due to this bankruptcy, and numerous more indirectly.

GM's bankruptcy is going to have a huge impact on the Indian economy too. Many Indian giants such as TCS and Infosys have a stake in GM's business, which has now stalled. The recession is deepening and there are no signs of rescue from this situation. Let’s hope the situation improves in the next two or three quarters. If this situation continues, it’s going to create more criminals than wise men.

Let’s keep our fingers crossed…and try to improve this situation!

Thursday, May 28, 2009

Google Page Rank Checker

You can embed this Page Rank Checker on your website using the code given below:

<iframe style="border: 0px; width: 600px;" src="" frameborder="no"></iframe>
In the next post I will be giving some cool images for displaying the page rank on the website.

Monday, May 25, 2009

ASP.NET Ajax 4.0 by Stephen Walther - TechEd Presentation

Here is a TechEd presentation delivered by Stephen Walther on ASP.NET Ajax 4.0 at Hyderabad.


Friday, May 22, 2009

How to create an IE8 Web Slice in ASP.NET?

Web Slice is a cool feature in IE8!

On frequently updating web sites we often need to visit the site just to monitor its status. Usually we keep the URL in our favorites list and hit the web site whenever required. When we hit the web site the entire page gets loaded, but our point of interest is only a small, frequently updated portion of it. This typically happens with stock update web sites: we want the updated stock quote, which is actually a very small portion of the page, yet to get those updates we need to load the entire web page. Is there an option to view only that small updated portion of the web site? YES, indeed there is, with IE8's Web Slice feature!

Using Web Slices, users can add small snippets of a web site to the IE favorites toolbar and monitor their updates. These Web Slices need to be enabled during web site creation. Please note that this feature is only supported in IE8. The figure below shows the Web Slice for the updated section.

Figure 1: Web Slice

How to create a Web Slice?
To enable a Web Slice on your site, just add HTML annotations to your webpage. A Web Slice uses a combination of the hAtom microformat and the Web Slice format.
<div class="hslice" id="item123">
    <p class="entry-title">Stock: Reliance Petro</p>
    <div class="entry-content">BSE: XXX, NSE: XXX</div>
</div>

These three annotations help IE recognize that it is a Web Slice and treat it like a feed, handling the discovery, subscription, and processing of the Web Slice. You can also add additional properties to a Web Slice, such as expiration, time-to-live value, and an alternative source as shown below:

<div class="hslice" id="datafound">
    <p class="entry-title">Stock: Reliance Petro</p>
    <a rel="feedurl"
       href="http://localhost:24730/StockInfo/DataFoundUpdate.aspx#datafound-update"></a>
</div>
In the above sample we have used "http://localhost:24730/StockInfo/DataFoundUpdate.aspx#datafound-update" as the URL for the feed. Please note that this URL carries the ID of the container DIV preceded by "#", i.e. "#datafound-update". It is better to have a separate .aspx page for showing the updates, because this separate page will be lightweight and hence can be rendered quickly. The DataFoundUpdate.aspx page mentioned in the above example has code as shown below:
<html xmlns="http://www.w3.org/1999/xhtml">
<head runat="server">
    <title>Untitled Page</title>
</head>
<body>
    <form id="form1" runat="server">
        <div class="hslice" id="datafound-update">
            <h2 class="entry-title">Data Found Report</h2>
            <a class="entry-content" rel="entry-content"
               href="http://localhost:24730/StockInfo/SilverlightDisplay.aspx"></a>
        </div>
    </form>
</body>
</html>
In the above code we have referenced a Silverlight page, just to show a rich UI to the user; instead, you can also render the updated content directly. Authentication is also supported for Web Slices. You can set the user name and password by changing the properties of a Web Slice (right-click the favorite slice --> Properties).

Some important links on Web Slice:
1. More information on Web Slice
2. Watch Web Slice Video
3. Download the source code

Here is a cool framework developed for Creating Web Slices in ASP.NET at CodePlex.
Hope this helps you!

Thursday, May 21, 2009

Microsoft to Ban memcpy()

The C runtime library was created about 25 years ago, when the threats to computers were altogether different. Computers were not interconnected and were mainly used for professional purposes. But today almost everybody has a computer connected to a network or to the internet. Thus network threats to the computer have increased, and so have coding vulnerabilities.

Let's take a look at what the memcpy() function does.

The memcpy() function is used to efficiently copy a block of data from one memory location to another:

void *memcpy( void *destination_ptr, const void *source_ptr, size_t num_bytes );

destination_ptr
A pointer to the destination memory block. The pointer can be of any type.

source_ptr
A pointer to the source memory block. The pointer can be of any type.

num_bytes
The number of bytes of data to copy.

For example, to copy 10000 bytes starting 5000 bytes into array src into a newly allocated destination buffer (dst):

char *dst = malloc(10000);
memcpy(dst, src + 5000, 10000);

Thus, the memcpy() function is primarily responsible for copying blocks of memory from one location to another. Later this year Microsoft is planning to ban this API for security reasons. There is a whole list of APIs that are banned for security reasons, which you can find here.
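To illustrate why memcpy() is considered dangerous, here is a minimal C sketch. The safe_copy wrapper is hypothetical (not a Microsoft API); it adds the destination-size check that memcpy() itself never performs, which is the same idea behind the safer replacements Microsoft recommends.

```c
#include <string.h>

/* memcpy() copies num_bytes blindly: if the destination buffer is
 * smaller than num_bytes, adjacent memory gets overwritten (a classic
 * buffer overflow). This hypothetical wrapper also takes the
 * destination buffer's size and rejects copies that would not fit. */
int safe_copy(void *dest, size_t dest_size,
              const void *src, size_t num_bytes)
{
    if (dest == NULL || src == NULL || num_bytes > dest_size)
        return -1;                  /* would overflow: refuse to copy */
    memcpy(dest, src, num_bytes);   /* safe: the copy fits in dest */
    return 0;
}
```

Calling safe_copy(buf, 8, data, 16) returns -1 instead of silently corrupting the memory after buf, while a copy that fits behaves exactly like memcpy().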

How to trace visitor information in ASP.NET?

It is often required to trace or gather visitor details for maintaining website statistics. This can be done easily in ASP.NET using the server variables and the request information available. Various attributes such as the remote host name, IP address, browser type and version can be obtained this way.

Source Code:
Response.Write("<b>Name:</b> " + Request.ServerVariables["REMOTE_HOST"] + "<br />");
Response.Write("<b>IP:</b> " + Request.ServerVariables["REMOTE_ADDR"] + "<br />");
Response.Write("<b>User agent:</b> " + Request.ServerVariables["HTTP_USER_AGENT"] + "<br />");
Response.Write("<b>Language:</b> " + Request.ServerVariables["HTTP_ACCEPT_LANGUAGE"] + "<br />");
Response.Write("<b>Browser:</b> " + Request.Browser.Browser + "<br />");
Response.Write("<b>Type:</b> " + Request.Browser.Type + "<br />");
Response.Write("<b>Version:</b> " + Request.Browser.Version + "<br />");
Response.Write("<b>Major version:</b> " + Request.Browser.MajorVersion + "<br />");
Response.Write("<b>Minor version:</b> " + Request.Browser.MinorVersion + "<br />");
Response.Write("<b>Beta:</b> " + Request.Browser.Beta + "<br />");
Response.Write("<b>Cookies:</b> " + Request.Browser.Cookies + "<br />");
Response.Write("<b>Frames:</b> " + Request.Browser.Frames + "<br />");
Response.Write("<b>Tables:</b> " + Request.Browser.Tables + "<br />");
Response.Write("<b>ActiveX:</b> " + Request.Browser.ActiveXControls + "<br />");
Response.Write("<b>Java Applets:</b> " + Request.Browser.JavaApplets + "<br />");
Response.Write("<b>JavaScript:</b> " + Request.Browser.JavaScript + "<br />");
Response.Write("<b>VBScript:</b> " + Request.Browser.VBScript + "<br />");
Response.Write("<b>Platform:</b> " + Request.Browser.Platform + "<br />");
Response.Write("<b>Crawler:</b> " + Request.Browser.Crawler + "<br />");


Tip: This information is also gathered by hackers to find vulnerabilities on your machine!

You can download the source code here.
Hope this helps you!

Enhance website security with ASP.NET AJAX NoBot Control

It has been a common security attack to bombard a site with (n) number of requests per second. This type of attack will reduce the server response time and make the system less usable. There are various mechanisms to prevent such attacks; one of them is CAPTCHA. When using CAPTCHA, the user (human) has to enter the code that appears in the image shown (see figure 1 below). The image may show a code, an arithmetic calculation etc. Automated programs will not be able to enter the exact CAPTCHA code, which prevents unwanted requests to the website.

Figure -1

The NoBot Control

NoBot is an ASP.NET Ajax control that provides CAPTCHA-like security without any human intervention, using simple JavaScript and server-side logic. NoBot employs a few different anti-bot techniques:

  • Forcing the client's browser to perform a configurable JavaScript calculation and verifying the result as part of the postback. (Ex: the calculation may be a simple numeric one, or may also involve the DOM for added assurance that a browser is involved)
  • Enforcing a configurable delay between when a form is requested and when it can be posted back. (Ex: a human is unlikely to complete a form in less than two seconds)
  • Enforcing a configurable limit to the number of acceptable requests per IP address per unit of time. (Ex: a human is unlikely to submit the same form more than five times in one minute)

The NoBot control can be initialized as shown below (the control prefix and property values here are illustrative):

<ajaxToolkit:NoBot ID="NoBot1" runat="server"
    OnGenerateChallengeAndResponse="CustomChallengeResponse"
    ResponseMinimumDelaySeconds="2"
    CutoffWindowSeconds="60"
    CutoffMaximumInstances="5" />

The following properties of the NoBot control are optional:
  • OnGenerateChallengeAndResponse - [Optional] EventHandler providing implementation of the challenge/response code
  • ResponseMinimumDelaySeconds - [Optional] Minimum number of seconds before which a response (postback) is considered valid
  • CutoffWindowSeconds - [Optional] Number of seconds specifying the length of the cutoff window that tracks previous postbacks from each IP address
  • CutoffMaximumInstances - [Optional] Maximum number of postbacks to allow from a single IP address within the cutoff window

A short video showing the usage of the NoBot control is given below:


Hope this helps you prevent unauthorized access..
Be secure.. Be safe!

Tuesday, May 19, 2009

New flaw found in IIS 6.0 - 18 May 09

Microsoft Internet Information Services (IIS) version 6.0 contains a vulnerability that could allow an unauthenticated, remote attacker to bypass security restrictions and access sensitive information.

The vulnerability is due to improper processing of Unicode characters in HTTP requests. An unauthenticated, remote attacker could exploit this vulnerability by sending a malicious HTTP request to the system. An exploit could allow the attacker to bypass security restrictions and download arbitrary files from the targeted system.

Exploit code is available.

Microsoft has not confirmed this vulnerability and updates are not available.

Courtesy: Cisco

A new flaw has been found in IIS 6.0 with WebDAV enabled. Cisco has reported the details of this flaw and the Microsoft team is investigating it. At present there is no patch available, and it is recommended to disable WebDAV until a patch is available.

The vulnerability is due to improper processing of Unicode characters in HTTP requests. When IIS is configured with WebDav, it improperly translates Unicode %c0%af (/) characters. Microsoft IIS may process an HTTP request that contains the character before requiring authentication to a protected resource. An unauthenticated, remote attacker could exploit this vulnerability by sending a malicious HTTP request to the targeted server. An exploit could allow the attacker to list directory contents or download protected files that are hosted by IIS without providing authentication credentials.

Courtesy: Cisco

Microsoft may soon release a patch to fix this vulnerability.

Windows API Code Pack for accessing Windows 7 features in .NET

The Windows API Code Pack provides a library for the Microsoft .NET Framework that can be used to access new Windows 7 features (and some Windows Vista features) from managed code. The existing .NET Framework does not cover these features. This library can be used with .NET Framework 3.5.

The features included in the API code pack are:

  • Support for Windows Shell namespace objects, including the new Windows 7 libraries, Known Folders and non file system containers.
  • Windows Vista and Windows 7 Task Dialogs.
  • Windows 7 Explorer Browser Control supporting both WPF and Windows Forms.
  • Support for Shell property system.
  • Helpers for Windows 7 Taskbar Jumplists, Icon Overlay and Progress bar.
  • Support for Windows Vista and Windows 7 common file dialogs, including custom file dialog controls.
  • Support for Direct3D 11.0 and DXGI 1.0/1.1 APIs.
  • Sensor Platform APIs
  • Extended Linguistic Services APIs
Important links for Windows API Code Pack:
1. More information on Windows API Code Pack
2. Download the Windows API Code Pack

This is really helpful for developing Windows 7 related features in .NET.

Friday, May 15, 2009

Gather all requirements and resources before committing to the client!

This is a cool e-mail I received from one of my friends. It relates to our habit of committing things to a client even before analyzing and gathering the requirements. Whenever a client asks, "I want XYZ functionality. Will I get it?", the immediate answer from the lead is "Yes yes! Why not...". After a couple of days (or months) we find that the words "Yes yes! Why not..." have made our life miserable. Well, let's go on to the small story...

A new vacuum cleaner salesman knocked on the door on the first house of
the street. A tall lady answered the door.

Before she could speak, the enthusiastic salesman barged into the living
room and opened a big black plastic bag and poured all the cow droppings
onto the carpet.
"Madam, if I cannot clean this up with the use of this new powerful
Vacuum cleaner, I will EAT all this dung!" exclaimed the eager salesman.

"Do you need chilli sauce or ketchup with that?" asked the lady.
The bewildered salesman asked, "Why, madam?"

"There's no electricity in the house..." said the lady.

MORAL: Gather all requirements and resources before working on any
project and committing to the client...!!!

SandCastle - An Ultimate Documentation Tool

Most of you are aware of the free documentation tool NDoc. Sandcastle was developed on similar lines for generating a rich set of documentation from source assemblies. Sandcastle is a documentation compiler for managed class libraries that generates Microsoft-style Help topics, both conceptual and API reference. It creates the API reference documentation from the XML comments provided in the code. Moreover, it extracts these comments from the managed assembly, which means we can generate the entire documentation from application assemblies; reflection is used to fetch the comments and other details. Sandcastle provides a ChmBuilder tool for generating HTML Help 1.x .chm files. Such tools are lifesavers when a customer asks for detailed documentation at the 11th hour.

You can get more information on Sandcastle at the CodePlex site.

Wednesday, May 13, 2009

Windows Server 2008 Server Core

The Server Core edition of the Windows Server 2008 operating system provides a low-maintenance server environment with limited functionality. Server Core is primarily designed for production systems due to its minimal installation and high performance. It does not provide a GUI; instead, like Unix, it provides a command prompt to work with. The minimal nature of Server Core brings limitations such as:
1. There is no Windows shell, and only a minimal GUI
2. There is limited managed code support
3. There is limited MSI support (unattended mode only)
4. ASP.NET is not supported (Microsoft is working on supporting it in the next release of Server Core)

The tools on Server Core are primarily designed to be managed remotely; e.g. you can manage IIS on Server Core in two different ways:
1. Use the command prompt on server core
2. Logon remotely and manage the IIS using the GUI on a remote machine

Since Server Core is minimal on the GUI side and strong on the functionality side, it is best suited for production systems. You can find more information on Server Core here.

Tuesday, May 12, 2009

VSTS Architecture Edition Overview

View more presentations from Steve Lange.

How to disable browser's Back button in ASP.NET

Almost every ASP.NET developer faces this problem at least once in his career. There could be numerous reasons for disabling the browser's back button: the site is secure and the user should not be allowed to go back to the previous page, or on an online exam site the student should not be allowed to view a question once it is answered, and so on.

What does the browser back button actually do?
The browser maintains a cache of the pages visited by the user. When the user clicks the back button, the browser displays the cached version of the previous page. To avoid this situation we can think of two solutions:
1. Whenever the user clicks the "back" button, redirect the user to the "next" page again using JavaScript: history.forward(1). This JavaScript needs to be attached to the onload event of the web page, so the user will never be able to return to the previous page. But this is not a reliable technique, since some browsers do not invoke the onload function when the "Back" button is pressed.
2. Another solution is to prevent the browser from caching the pages a user visits. Yes, this can be achieved through ASP.NET server-side code. Add the following code to the Page_Load event of the ASP.NET web page or MasterPage:
// Disable the cache
Response.Buffer = true;
Response.Expires = -1500;
Response.CacheControl = "no-cache";

// Check for your session ID
if (Session["SessionId"] == null)
{
    Response.Redirect("Home.aspx");
}
This code disables the cache for the current page; the page contents are maintained in memory, i.e. in the buffer. Once the user logs out, the session and buffer are cleared. As we are not caching the page, the back button will not work anyway. In effect, we have successfully disabled the Back button.
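For completeness, the first (less reliable) technique can be sketched as plain HTML/JavaScript on the page you want to lock:

```html
<!-- Whenever this page is re-displayed via the Back button, push the
     user forward again. Not reliable: some browsers skip onload when
     serving a page from the cache. -->
<body onload="history.forward(1);">
    <!-- page content -->
</body>
```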

Hope this helps you!

Twitter Updater - A Simple Windows Application

What is Twitter?
Twitter is a service for friends, family, and co-workers to communicate and stay connected through the exchange of quick, frequent answers to one simple question: What are you doing? Thus you can keep your friends, family and co-workers updated with what you are doing, simply by updating Twitter. This article will explain how to post a message to Twitter using your own desktop application.

Use of Twitter Updater Application
To update Twitter you normally need to log on to the Twitter website and then post your message. For every simple update, opening a web browser, navigating to the site and then posting a message becomes a cumbersome task. To make it easy, I have developed a simple Windows application that can stay on your desktop and update your Twitter in seconds! Frequent users can keep their Twitter username and password in the associated config file. This eliminates the need to enter the username and password every time you want to update Twitter.

Twitter Updater API's and Screen Shots
The Twitter Updater application makes use of the Twitter framework APIs (see the links below). Following are screenshots of the application:

Helpful links for Twitter Programming in .Net
Twitter API Documentation:
Twitter Open Source Example:
Online Discussion Group for Twitterizer:

Friday, May 8, 2009

Windows 7 RC1 Released

Microsoft has recently released RC1 (Release Candidate 1) of Windows 7. The Win7 RC1 has many exciting features and can be downloaded from the Microsoft website. The RC1 download will be available through July 2009, and unlike the beta release there is no limit on the number of license copies of RC1 one can have. The RC will expire on June 1, 2010; starting on March 1, 2010, the computer will begin shutting down every two hours. You can find the download instructions here.

One of the exciting features of RC1 is the Windows XP mode. Yes, you heard right! Windows 7 has provided a Windows XP mode for mission critical WinXP apps. It's actually a virtual Windows XP machine with a fully licensed copy of Windows XP SP 3 installed.

Win XP Mode In Win7!

There's much more to come... stay tuned!

Tuesday, May 5, 2009

Seadragon By Microsoft Live Labs

Seadragon, developed at Microsoft Live Labs, aims at a superior picture experience. It is just like the Deep Zoom functionality of Silverlight and can scale pictures between wall size and mobile size while maintaining their clarity. The Seadragon Deep Zoom functionality can be used in Silverlight applications to give them a rich look; similarly, an Ajax version of Seadragon is also available. The following four "promises" of Seadragon have been listed on the MS Live Labs web site:

1. Speed of navigation is independent of the size or number of objects.
2. Performance depends only on the ratio of bandwidth to pixels on the screen.
3. Transitions are smooth as butter.
4. Scaling is near perfect and rapid for screens of any resolution.

This will definitely pose a challenge to Adobe Flash which was earlier used to show rich picture contents on web UI. Find more information on Seadragon here.

Cheers to Microsoft!

Tuesday, April 28, 2009

4 Liquid Stages of Life

The bitter truth...

Friday, April 24, 2009

ClientID feature of ASP.NET 4.0

One of the new features being added to ASP.NET 4.0 is the ability to control the client-side IDs that are generated by the framework. Previously the framework generated unique ClientIDs for the controls. These ClientIDs were generated by combining "ctl00" with the parent container's name, like "ctl00_ContainerDIV_ctl01_Textbox1".

The Problem
In the earlier versions of the .NET Framework, the ClientIDs were generated uniquely by the framework. It was frustrating to use these ClientIDs in the JavaScript of a web page. If a developer hard-coded a ClientID in the JavaScript and another developer later changed the control's ID, the JavaScript would throw an error, since the old hard-coded ClientID no longer existed. To avoid such scenarios, developers started using server tags for Control.ClientID in JavaScript, as shown in the section below.

Old Solution
Each control has a read-only property called ClientID that supplies the unique client-side ID. This can be used in the code-behind for dynamically adding client-side IDs to scripts. One such example is shown below:

<script type="text/javascript">
function ShowMessage() {
    alert('<%= Control.ClientID %>');
}
</script>

ASP.NET 4.0 Solution
There is not really a clean way to use the ClientID property with lots of controls and lots of external script files. With the increasing use of client-side scripting and AJAX, it became important to make the client-side ID controllable. The solution was to introduce a ClientIDMode property on each control. Depending on the mode, the developer has full control over the client-side IDs of a control.

Client ID Modes
There is now a new property on every control (this includes pages and master pages, as they inherit from Control) called ClientIDMode that is used to select the behavior of the client-side ID.

<asp:Label ID="Label1" runat="server" ClientIDMode="[Mode Type]" />

Mode Types
Legacy: The default value if ClientIDMode is not set anywhere in the control hierarchy. This causes client-side IDs to behave the way they did in versions 2.0, 3.0 and 3.5 of the framework. This mode will generate an ID similar to "ctl00_ContainerDIV_ctl01_Textbox1".


<asp:TextBox ID="txtEcho" runat="server" Width="65%" ClientIDMode="Legacy" />


<input id="ctl00_MasterPageBody_ctl00_txtEcho" style="width: 65%"
name="ctl00$MasterPageBody$ctl00$txtEcho" />

Inherit: This is the default behavior for every control. The control looks to its parent to get its value for ClientIDMode. You do not need to set this on every control, as it is the default; it is used only when the ClientIDMode has been changed higher up and the desired behavior is to inherit from the control's parent.

Static: This mode does exactly what you would expect: it makes the client-side ID static, meaning that whatever you put for the ID is what will be used for the client-side ID.
[Warning: this means that if a static ClientIDMode is used in a repeating control, the developer is responsible for ensuring client-side ID uniqueness.]


<asp:TextBox ID="txtEcho2" runat="server" Width="65%" ClientIDMode="Static" />


<input id="txtEcho2" style="width: 65%" name="ctl00$MasterPageBody$ctl00$txtEcho2" />

Predictable: This mode is used when the framework needs to ensure uniqueness, but in a predictable way. The most common use for this mode is on databound controls. The framework will traverse the control hierarchy, prefixing the supplied ID with its parent control's ID, until it reaches a control in the hierarchy whose ClientIDMode is defined as Static. If the control is placed inside a databound control, a suffix with a value that identifies that instance will also be added to the supplied ID. The ClientIDRowSuffix property controls the value that will be used as the suffix. This mode will generate an ID similar to "Gridview1_Label1_0".

1. With no ClientIDRowSuffix defined. This is also the behavior for databound controls without a DataKeys collection, e.g. the Repeater control. Notice that the framework has traversed the control hierarchy, prefixed the ID with the parent's ID, and suffixed the ID with the row index.


<asp:GridView ID="EmployeesNoSuffix" runat="server" AutoGenerateColumns="false" ClientIDMode="Predictable">
    <Columns>
        <asp:TemplateField HeaderText="ID">
            <ItemTemplate><asp:Label ID="EmployeeID" runat="server" Text='<%# Eval("ID") %>' /></ItemTemplate>
        </asp:TemplateField>
        <asp:TemplateField HeaderText="Name">
            <ItemTemplate><asp:Label ID="EmployeeName" runat="server" Text='<%# Eval("Name") %>' /></ItemTemplate>
        </asp:TemplateField>
    </Columns>
</asp:GridView>


<table id="EmployeesNoSuffix" style="border-collapse: collapse" cellspacing="0" rules="all" border="1">
    <tr><th scope="col">ID</th><th scope="col">Name</th></tr>
    <tr><td><span id="EmployeesNoSuffix_EmployeeID_0">1</span></td>
        <td><span id="EmployeesNoSuffix_EmployeeName_0">EmployeeName1</span></td></tr>
    <tr><td><span id="EmployeesNoSuffix_EmployeeID_8">9</span></td>
        <td><span id="EmployeesNoSuffix_EmployeeName_8">EmployeeName9</span></td></tr>
</table>

2. With a ClientIDRowSuffix defined. This looks in the control's DataKeys collection for the value and then suffixes the ID with that value.


<asp:GridView ID="EmployeesSuffix" runat="server" AutoGenerateColumns="false" ClientIDMode="Predictable" ClientIDRowSuffix="ID">
    <Columns>
        <asp:TemplateField HeaderText="ID">
            <ItemTemplate><asp:Label ID="EmployeeID" runat="server" Text='<%# Eval("ID") %>' /></ItemTemplate>
        </asp:TemplateField>
        <asp:TemplateField HeaderText="Name">
            <ItemTemplate><asp:Label ID="EmployeeName" runat="server" Text='<%# Eval("Name") %>' /></ItemTemplate>
        </asp:TemplateField>
    </Columns>
</asp:GridView>


<table id="EmployeesSuffix" style="border-collapse: collapse" cellspacing="0" rules="all" border="1">
    <tr><th scope="col">ID</th><th scope="col">Name</th></tr>
    <tr><td><span id="EmployeesSuffix_EmployeeID_1">1</span></td>
        <td><span id="EmployeesSuffix_EmployeeName_1">EmployeeName1</span></td></tr>
    <tr><td><span id="EmployeesSuffix_EmployeeID_9">9</span></td>
        <td><span id="EmployeesSuffix_EmployeeName_9">EmployeeName9</span></td></tr>
</table>

3. With a ClientIDRowSuffix defined, but with a compound value instead of a single one. This exhibits the same behavior as a single value, but it suffixes both values onto the ID.


<asp:GridView ID="EmployeesCompSuffix" runat="server" AutoGenerateColumns="false" ClientIDMode="Predictable" ClientIDRowSuffix="ID, Name">
    <Columns>
        <asp:TemplateField HeaderText="ID">
            <ItemTemplate><asp:Label ID="EmployeeID" runat="server" Text='<%# Eval("ID") %>' /></ItemTemplate>
        </asp:TemplateField>
        <asp:TemplateField HeaderText="Name">
            <ItemTemplate><asp:Label ID="EmployeeName" runat="server" Text='<%# Eval("Name") %>' /></ItemTemplate>
        </asp:TemplateField>
    </Columns>
</asp:GridView>


<table id="EmployeesCompSuffix" style="border-collapse: collapse" cellspacing="0" rules="all" border="1">
    <tr><th scope="col">ID</th><th scope="col">Name</th></tr>
    <tr><td><span id="EmployeesCompSuffix_EmployeeID_1_EmployeeName1">1</span></td>
        <td><span id="EmployeesCompSuffix_EmployeeName_1_EmployeeName1">EmployeeName1</span></td></tr>
    <tr><td><span id="EmployeesCompSuffix_EmployeeID_9_EmployeeName9">9</span></td>
        <td><span id="EmployeesCompSuffix_EmployeeName_9_EmployeeName9">EmployeeName9</span></td></tr>
</table>

Thanks to Microsoft ASP.NET Team for adding this valuable feature!

Thursday, April 23, 2009

Tech-Ed India 2009 Top Architect Contest

Microsoft has launched a Top Architect Contest in India. There are fabulous prizes along with entry to Tech-Ed 2009 which is going to be held at Hyderabad from 13-15 May 2009. This contest will evaluate your architectural skills. The problem statement is given as "India Election Pedia – A DIGITAL MESH". The evaluation of the design will be done by a team of senior architects at Microsoft.

Contest Prizes:

  • The Architects with the TOP-10 entries will receive a MICROSOFT-branded WATCH each.
  • The Architects with the TOP-3 entries will be invited to attend Tech.Ed-India 2009 – scheduled to be held at Hyderabad on 13-15 May, 2009 – for FREE. Benefits will include:
      • Reimbursement of Travel & Stay; and
      • FREE entry to Tech.Ed-India 2009.
  • The two (2) Contest Runners-Up will each receive a WINDOWS MOBILE PHONE.
  • The Contest Winner will receive the Grand Prize of a WINDOWS VISTA LAPTOP.
Details of this contest can be found here.

Tuesday, April 14, 2009

C# 4.0 New Exciting Features!

C# has always been an exciting language. In each version of C# we find many new features that make our lives simpler and easier. C# 4.0 brings us exciting features such as:

  • Dynamically typed objects
  • Named and Optional function parameters
  • Covariance and Contravariance

Dynamically Typed Objects
Today, in C#, we may have code that gets an instance of the Calculator class and then invokes the Add() method on that instance to return an integer.

Calculator calc = GetCalculatorInstance();
int sum = calc.Add(10, 20);

Here all the objects are statically typed, i.e. the type of the object is known at compile time. Now let's take an example where the Calculator class resides in an external assembly and we want to invoke the Add() method dynamically from C#. When the phrase "dynamically invoke a method" comes up, we immediately think of Reflection. Yes, you are correct; we will use reflection to invoke the Add() method as shown below:

object calc = GetCalculatorInstance();
Type type = calc.GetType();
object result = type.InvokeMember("Add",
    BindingFlags.InvokeMethod, null, calc,
    new object[] { 10, 20 });
int sum = Convert.ToInt32(result);

So far so good. Now what if you don't want to play with reflection?
In C# 4.0, you don't have to. By making use of a "statically typed dynamic object" you can achieve the same output as with reflection. Let's look at the code first, which will make our understanding clearer:

dynamic calc = GetCalculatorInstance();
int result = calc.Add(10, 20);

In the above example we are declaring a variable, calc, whose static type is dynamic. Yes, you read that correctly, we have statically typed our object to be dynamic. We'll then be using dynamic method invocation to call the Add() method and then dynamic conversion to convert the result of the dynamic invocation to a statically typed integer.
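To see this end to end, here is a minimal, self-contained sketch. The Calculator class and GetCalculatorInstance factory are stand-ins invented for this example (matching the names used in the snippets above), not part of any real API:

```csharp
using System;

public class Calculator
{
    public int Add(int a, int b) { return a + b; }
}

public class Program
{
    // Stand-in for the factory used in the snippets above; in the
    // motivating scenario this would return an object whose concrete
    // type is not known at compile time.
    public static object GetCalculatorInstance() { return new Calculator(); }

    public static void Main()
    {
        // The static type of 'calc' is dynamic: member lookup for
        // Add() is deferred until run time, with no reflection code.
        dynamic calc = GetCalculatorInstance();
        int sum = calc.Add(10, 20);
        Console.WriteLine(sum); // 30
    }
}
```

Note that if Add() did not exist on the runtime type, the failure would surface as a runtime exception rather than a compile error, which is the trade-off of going dynamic.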

I would still encourage you to use static types wherever possible, since they are best for performance reasons.

Named and Optional function parameters
Remember C++'s default values for function parameters, which made the parameter in question optional? In C# 4.0 you will be able to do the same. In previous versions of the language, to achieve this type of behavior we used to create overloaded methods with varying parameters. This increased the number of methods while keeping the business logic almost the same.

Let's assume that we have the following OpenTextFile method along with three overloads of the method with different signatures. The overloads of the primary method will call the primary method with default values for the arguments expected.

4 arguments (Primary method)
public StreamReader OpenTextFile(
    string path,
    Encoding encoding,
    bool detectEncoding,
    int bufferSize) { }

3 arguments (Overloaded method)
public StreamReader OpenTextFile(
    string path,
    Encoding encoding,
    bool detectEncoding) { }

2 arguments (Overloaded method)
public StreamReader OpenTextFile(
    string path,
    Encoding encoding) { }

1 argument (Overloaded method)
public StreamReader OpenTextFile(string path) { }

In C# 4.0 the primary method can be refactored to use optional parameters as shown below:

Single primary method accepting default values
public StreamReader OpenTextFile(
    string path,
    Encoding encoding = null,
    bool detectEncoding = false,
    int bufferSize = 1024) { }

It is now possible to call the OpenTextFile method omitting one or more of the optional parameters.

OpenTextFile("foo.txt", Encoding.UTF8);

It is also possible to provide named arguments, so the OpenTextFile method can be called omitting one or more of the optional parameters while specifying another parameter by name.

OpenTextFile("foo.txt", Encoding.UTF8, bufferSize: 4098);

Please note that named arguments must come after any positional arguments, although the named arguments themselves can appear in any order.
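A compilable sketch of these rules, using a hypothetical Describe method with the same parameter shape as the refactored OpenTextFile (it returns a string so the effect of each call is visible):

```csharp
using System;

public class Demo
{
    // Hypothetical method mirroring OpenTextFile's optional parameters.
    public static string Describe(string path,
                                  bool detectEncoding = false,
                                  int bufferSize = 1024)
    {
        return string.Format("{0}|{1}|{2}", path, detectEncoding, bufferSize);
    }

    public static void Main()
    {
        // All optional parameters omitted: defaults are used.
        Console.WriteLine(Demo.Describe("foo.txt"));
        // Positional argument first, then named arguments in any order.
        Console.WriteLine(Demo.Describe("foo.txt", bufferSize: 4096, detectEncoding: true));
    }
}
```

The first call prints "foo.txt|False|1024" and the second "foo.txt|True|4096"; swapping the order of the two named arguments compiles to exactly the same call.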

Covariance and Contravariance

An operator between types is said to be covariant if it orders these types from more specific to more general ones. Similarly, an operator between types is said to be contravariant if it orders them in the reverse order. Whenever neither of these conditions is met, the operator is said to be invariant.
Eric Lippert has a complete 11-part article on covariance and contravariance that describes the matter in detail with examples. This is what I had been looking for for a long time :)
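As a quick taste of what C# 4.0 enables here, this sketch shows both directions using the variance annotations on IEnumerable&lt;out T&gt; and Action&lt;in T&gt; (neither assignment compiles in earlier versions of the language):

```csharp
using System;
using System.Collections.Generic;

public class VarianceDemo
{
    public static void Main()
    {
        // Covariance: IEnumerable<out T> lets a sequence of the more
        // specific type (string) be used as a sequence of the more
        // general type (object).
        IEnumerable<string> strings = new List<string> { "a", "b" };
        IEnumerable<object> objects = strings;

        // Contravariance: Action<in T> lets a handler of the more
        // general type (object) stand in for a handler of the more
        // specific type (string).
        Action<object> printAny = o => Console.WriteLine(o);
        Action<string> printString = printAny;
        printString("hello");
    }
}
```

The assignments work only because T is annotated out (produced) in IEnumerable and in (consumed) in Action; a type parameter used in both positions must remain invariant.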

Apart from the features mentioned above, C# 4.0 is going to bring a lot more for the developers. Keep watching...

Monday, April 13, 2009

Indian Elections: Raise your voice for developing India

Parliamentary elections in India will be held in five phases between April 16 and May 13, India's Election Commission announced Monday. A new Lok Sabha, or lower house of Parliament, will be constituted before June 2.

These elections are crucial to the growth of India. Given the problems that India faces today, it is important to elect a PM who is well educated and who can work towards the growth of the country with respect to Education, Agriculture, Information Technology, Defense Technology, etc.

When we employ a housemaid for cooking food, what do we expect from her?
1. She should have a good character.
2. She should work with dedication.
3. She should follow clean practices.
4. She should respect time.
5. She should not have any criminal background, and she should not steal anything from the house.
6. Etc., etc...

Now, when we expect so many things from a person whom we employ and pay a small amount of money, how much should we expect from the Prime Minister of India?

Having said this, it is up to all of us to elect a "good" PM. The elections are upon us, and we should think carefully about the candidate to whom we give our vote. Many of the candidates have a number of court cases pending against them: some for murder, some for rape, some for financial fraud. Don't select a person who is a CRIMINAL. How can a criminal govern a country?

This time, let's go ahead and elect a PM who will develop India!