Friday, September 24, 2010

Geo-Location using IP Address


Many web applications use the IP address to find the geographical location of their visitors, with uses ranging from serving relevant advertisements to customizing the behavior of the web site. Geo-location using an IP address is normally done with databases, and there are two different options available: HostIP.info and MaxMind. HostIP.info exposes its data through a web service; access is free, but the data is not very accurate. MaxMind offers both commercial and free solutions. The free solutions, also called the lite versions, are not highly accurate, but the commercial products are. The code presented in this article has been written to work with both MaxMind's database and HostIP.info's web service. The default is HostIP.info; you can change it by setting the NodeInfo.LocationService property.

//Use MaxMind's geolocation database
NodeInfo.LocationService = new Maxmind.NodeInfoLookupService();

//Use HostIP.info's Geolocation
NodeInfo.LocationService = new HostIP.NodeInfoLookupService();
The LocationService property is of type INodeInfoLocationService, which makes it easy to plug in custom lookup services. In a later version, the class name of the object to create could come from a configuration file. INodeInfoLocationService is quite simple, with just one method named Lookup.

bool Lookup(IPAddress address, NodeInfo nodeInfo);
The Lookup function takes an IP address and populates a NodeInfo object, which stores the latitude, longitude, city, region and the country name.
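Pieced together from the description above, the interface and the object it populates look roughly like this (a sketch; the actual members in the tool may differ slightly):

public interface INodeInfoLocationService
{
    //Returns true when the lookup succeeded and nodeInfo was populated
    bool Lookup(IPAddress address, NodeInfo nodeInfo);
}

public class NodeInfo
{
    public double Latitude { get; set; }
    public double Longitude { get; set; }
    public string City { get; set; }
    public string Region { get; set; }
    public string Country { get; set; }

    //The lookup service used by the tool; HostIP.info by default
    public static INodeInfoLocationService LocationService { get; set; }
}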

The HostIP.info web site exposes its API through HTTP GET at the URL: http://api.hostip.info/. For example, you can get the location of the IP address 207.12.0.9 using the following URL:

http://api.hostip.info/?ip=207.12.0.9
The response is returned as XML. The XML gives the latitude, longitude, city, state and country. In the parser implemented in this article, the HTTP GET request is issued through the XmlDocument.Load method.

XmlDocument doc = new XmlDocument();
doc.Load(String.Format("http://api.hostip.info/?ip={0}", address));
The individual elements are extracted using XPath queries. For example, the country name is extracted using the following query.

node = doc.SelectSingleNode("//hostip:countryName", manager);

if (node != null)
{
    nodeInfo.Country = node.InnerText;
}
One detail to understand here is how namespaces work with XPath. The manager argument passed to the SelectSingleNode method is of type XmlNamespaceManager. The namespace prefixes are registered using the following code:

XmlNamespaceManager manager = new XmlNamespaceManager(doc.NameTable);
manager.AddNamespace("gml", "http://www.opengis.net/gml");
manager.AddNamespace("hostip", "http://www.hostip.info/api");
The latitude, longitude, city and state are extracted in a similar fashion using XPath queries.
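For instance, HostIP.info returns the coordinates inside a gml:coordinates element as a comma-separated longitude,latitude pair, so the extraction can look like this (a sketch assuming that response shape; CultureInfo requires System.Globalization):

node = doc.SelectSingleNode("//gml:coordinates", manager);

if (node != null)
{
    //HostIP.info returns "longitude,latitude"
    string[] parts = node.InnerText.Split(',');
    nodeInfo.Longitude = double.Parse(parts[0], CultureInfo.InvariantCulture);
    nodeInfo.Latitude = double.Parse(parts[1], CultureInfo.InvariantCulture);
}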

The MaxMind lookup service is much simpler. MaxMind provides the source code of a library that can read its databases; it can be downloaded from http://www.maxmind.com/app/api. The Maxmind.NodeInfoLookupService uses the LookupService class provided by MaxMind. I have made only minor changes to MaxMind's code; most of it is retained as-is. Here is the code for the wrapper around MaxMind's code.

LookupService svc = new LookupService(Path.Combine(
    Path.GetDirectoryName(this.GetType().Assembly.Location),
    "GeoLiteCity.dat"));
Location loc = svc.getLocation(address);

if (loc != null)
{
    //Copy the fields exposed by MaxMind's Location class
    nodeInfo.City = loc.city;
    nodeInfo.Region = loc.region;
    nodeInfo.Country = loc.countryName;
    nodeInfo.Latitude = loc.latitude;
    nodeInfo.Longitude = loc.longitude;
}
You need to download and extract the GeoLite City database from MaxMind.com. The compressed archive of the database is available at the following URL: http://www.maxmind.com/app/geolitecity. The GeoLiteCity.dat file must be placed in the same directory as the executable.

This was a brief description of the geo-location code. Now, let's examine how the Virtual Earth API is used in the code.

Using the Virtual Earth API
You can get the same mapping features in your web applications as local.live.com by using the Virtual Earth API. The Virtual Earth API is JavaScript code and can be loaded in web pages by including a script:
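A typical include looks like the following (the script URL and version parameter here are assumed from the Virtual Earth 6.x SDK):

<script type="text/javascript" src="http://dev.virtualearth.net/mapcontrol/mapcontrol.ashx?v=6.2"></script>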


Once the script is loaded, you can use the JavaScript classes available in the API. A comprehensive API reference is available at the following URL: http://dev.live.com/virtualearth/sdk/. Also, check out the interactive SDK tutorial available at the same URL. To display the Virtual Earth map in your web page, you need to designate a <div> element where you want the map image to appear:
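The id must match the one passed to the VEMap constructor below; the inline size and positioning are just an example:

<div id="mapContainer" style="position:relative; width:600px; height:400px;"></div>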


Next you will use some JavaScript to load the map into the designated element:

map = new VEMap("mapContainer");
map.LoadMap();
Notice that so far I have only talked about the Virtual Earth API as a web-based API. Since Tracert Map is a Windows application, the question arises: how can we use Virtual Earth in a Windows application? The answer is simple: host Internet Explorer in the desktop application. Luckily, the .NET Framework 2.0 provides the WebBrowser control. We need to create an HTML page to hold all the HTML, JavaScript and CSS, and load this page into the hosted web browser. An HTML page named tracert.htm needs to be placed in the directory of the executable for the following code to work:

mapBrowser.Navigate(Path.Combine(
    Path.GetDirectoryName(this.GetType().Assembly.Location),
    "tracert.htm"));
The code shown so far is sufficient to display the map, but we need to control the map from C# code, and the JavaScript code in the web page also needs to invoke C# code. Luckily, the web browser control has a property called ObjectForScripting for this purpose. This property can be assigned any object; the only catch is that the class needs to be marked with the ComVisible attribute.

[ComVisible(true)] //So that scripts can access the methods
public partial class TracertMapForm : Form
A public method in the class declared as public void MapLoaded() can be accessed from JavaScript using the window.external.MapLoaded() statement. This is how JavaScript can use C# code, but how about C# code invoking JavaScript? This can be done using the InvokeScript function. For example, the code mapBrowser.Document.InvokeScript("OnStartTrace"); invokes the JavaScript function declared as function OnStartTrace. Now that we know how two-way communication between script and C# code can be achieved, let's look at the details of the Virtual Earth API.
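Putting the two directions together, the bridge looks roughly like this (a sketch assembled from the names used in this article):

[ComVisible(true)] //So that scripts can access the methods
public partial class TracertMapForm : Form
{
    //Called from JavaScript in tracert.htm via window.external.MapLoaded()
    public void MapLoaded()
    {
        //The page is ready; drive the map from C# by invoking the
        //JavaScript function declared as: function OnStartTrace() { ... }
        mapBrowser.Document.InvokeScript("OnStartTrace");
    }
}

Remember to point the control at the scripting object before navigating, for example mapBrowser.ObjectForScripting = this;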

We will be using the Virtual Earth API to mark nodes on the map; such a marker is called a pushpin in Virtual Earth terminology. To place one, we need the latitude and longitude of the node, which were obtained from the geolocation service. To display the pushpin we also need the URL of the image that will be used as the pushpin. Images of numbers circled in red can be obtained at the URL http://local.live.com/i/pins/RedCircleXX.gif, where XX is replaced by a number. For example, http://local.live.com/i/pins/RedCircle10.gif is the image of the number 10 enclosed in a red circle. To represent the nth node as a pushpin on the map, we use the URL http://local.live.com/i/pins/RedCircle{n}.gif, where {n} is replaced by the actual number. Another interesting feature of a pushpin is that when the mouse hovers over it, a popup is shown, as in the screenshot below.


The IP address is displayed in bold text as the title of the popup and the DNS name in lighter text as the body. The Virtual Earth API takes care of displaying the popup; all we need to do is specify the popup title and body when creating the pushpin:

var location = new VELatLong(nodeInfo.Latitude, nodeInfo.Longitude);
var imageURL = "http://local.live.com/i/pins/RedCircle" + hopNo + ".gif";
var pushpin = new VEPushpin(hopNo,
    location,          //pushpin latitude and longitude
    imageURL,          //pushpin image
    nodeInfo.Address,  //pushpin title
    nodeInfo.HostName  //pushpin body
);
map.AddPushpin(pushpin);
The final aspect of the Virtual Earth API used in the tool is polylines. As seen in the first screenshot, the nodes on the map are connected by a blue line, created using the VEPolyline JavaScript object. The code to create a semi-transparent blue polyline through an array of latitude/longitude pairs, named nodes, is shown below:

var poly = new VEPolyline("route", nodes);
poly.SetWidth(2);
poly.SetColor(new VEColor(0, 0, 255, 0.5));
map.AddPolyline(poly);
Final Words
I have briefly described how the tool works and its various elements. The tool is still under development and there are many features that need to be added. Please note that the pushpins may not represent the locations accurately, because neither HostIP.info nor the MaxMind lite database is highly accurate. The HostIP.info site allows you to correct its data, and I recommend doing so to further improve that excellent, free GeoIP database.


Wednesday, September 15, 2010

Scary Unmanned Aerial Vehicles Used by the Military





The X-47B is designed to be launched either from land or catapult-launched from ships, and could be refueled in midair.

Look out, everybody, because here come the unmanned aerial vehicles, otherwise known as UAVs or drones. They’ve been flourishing in the Iraq War, starting with just a few unarmed drones when the conflict began in 2003 and now growing to more than 7,000. Many are packing serious missiles and bombs, and some soon could be autonomous. This is undoubtedly the dawn of an entirely new era of military might: robot wars.

Flying over battlefields in a variety of shapes and sizes, the aircraft are controlled from the battlefield itself, from thousands of miles away, or anywhere in between. They can keep an eye on bad guys wherever they may roam, and some can even blow them up at a moment’s notice. One reason they’re so compelling for military types: They present no danger to their pilots. To help you recognize and identify these scary robotic birds, we picked out a representative sample of six of these soulless, empty flyers for you to contemplate.


Micro Air Vehicle


Honeywell’s insect-like micro air vehicles, affectionately known by soldiers as “flying beer kegs,” are used for surveillance. The 13-inch hovering devices are small enough to carry in a backpack, yet large enough to carry a camera that’s just right for finding improvised explosive devices. The first generation is too noisy, though, and not reliable enough, so Honeywell is preparing a second-generation model that should be ready for the battlefield by next year.

Boeing ScanEagle

This catapult-launched spy plane has a 10-foot wingspan and is 4 feet long, and can stay in the air for more than 20 hours. It’s been used in the Iraq war since 2005. It’s equipped with an image-stabilized high-resolution video system that transmits its signal back to its base, which can be 62 miles away. When it’s finally time for it to land, it flies into a catching system called a SkyHook, consisting of a 30- to 50-foot pole that snags the plane’s wingtip. Try that with a manned aircraft!

RQ-4 Global Hawk


Called “one of the most coveted pieces of military technology in the world,” this $35 million remotely piloted aircraft is powered by a turbofan engine and is about the size of a fighter plane. It can fly 12,600 miles, at altitudes of up to 65,000 feet, staying aloft for 30 hours. Used primarily for surveillance, its bulbous nose carries classified sensing devices including various GPS devices, infrared cameras, and synthetic aperture radar (SAR) that can see through clouds and even sandstorms. Sending its data back to its base at 50 Mb per second, it can precisely identify where moving targets are located. As of this year, the planes have flown more than 30,000 combat hours.

MQ-9 Reaper

This is the bad boy of the fleet, the hunter-killer UAV that you don’t want to see flying overhead if you’re a bad guy. Two operators at a base in the desert near Las Vegas control this baby via a satellite link, with one piloting the aircraft and another operating its sensors (too bad there’s a lag of 1.2 seconds for their input to reach the Reaper). Its 950 hp turboprop can fly it to an altitude of 60,000 feet, carrying a payload of about 3,000 pounds. On board: a variety of precision-targeted missiles and bombs, along with a camera that can read a license plate from 2 miles away. These drones are capable of autonomous missions, but the Air Force still insists on using human pilots. For now.

X-37B

With its 29-foot length and 14-foot wingspan, the X-37B is a quarter the size of the Space Shuttle. But unlike the shuttle, most of its functions are top secret. In fact, the Air Force didn’t want anyone to know anything about the specific mission of this unmanned military vehicle, which for now will probably be used for various types of surveillance. Strangely enough, the only reason we know anything about it is because it was spotted orbiting high overhead by amateur satellite watchers, who determined it must be on a spy mission because of its telltale routine of passing over the same location on the ground every four days.

X-47B

Behold the future of aerial drones — this one could be the scariest of them all. It’s designed to be launched either from land or catapult-launched from ships, and could be refueled in midair, letting it fly indefinitely. This stealthy 19-foot-long aircraft will have the smarts to carry out its own missions, and the oomph to carry 4,500 pounds of bombs, missiles, and surveillance gear. There’s even talk of loading it up with lasers and microwave weapons. Look out, everyone, because this is the flying killer robot of your nightmares.

Tuesday, September 14, 2010

Converting a Cisco IP Phone from SCCP (Skinny) to SIP Firmware


Posted by Cory Andrews on April 3rd, 2009 in Technical Advice, VoIP Phones


Cisco IP Phones are amongst the most popular desktop IP phones out there. By default, Cisco ships its phones from the factory pre-loaded with its proprietary SCCP protocol firmware (also commonly referred to as “skinny”).

If you are running Asterisk, Trixbox, Switchvox or any other standards-based SIP platform or hosted service, you’ll need to migrate your Cisco phone(s) from their native SCCP (skinny) load to SIP in order to use them. While this is not a particularly difficult procedure, it can be frustrating for those who have never attempted the process.

For the purposes of this exercise, we’re using a Cisco CP-7960G. The process may be slightly different depending upon the specific model of Cisco IP phone you are working with.

Cisco 7940/7960 IP phones can run either the Skinny Call Control Protocol (SCCP), the Session Initiation Protocol (SIP), or the Media Gateway Control Protocol (MGCP), but only one at a time. This is possible because they load different firmware versions on bootup. This functionality is transparent to the end user, and you enable it through changes to the basic text-based configuration files that the phones download from a Trivial File Transfer Protocol (TFTP) server.

First, a few prerequisites:

A – You’ll need a CCO login for Cisco.com in order to obtain the latest SIP firmware. The easiest way to obtain a CCO login is to purchase a Smartnet maintenance contract for your Cisco IP phone from an authorized Cisco reseller. Once you have a registered Smartnet, you can obtain CCO login credentials and access the firmware downloads section of Cisco’s website. Expect to pay $8-$15 for a Smartnet contract.

B – You should have a comfort level with basic networking concepts and TFTP setup/administration.

Follow these steps to enable SIP functionality:

Step #1
Download these files from Cisco SIP IP Phone 7940/7960 Software (registered customers only) and place them in the root directory of your TFTP server (tftpboot on a UNIX machine):

P0S30100.bin—This is the SIP image. You’ll want to download the file in binary format to ensure that it is not corrupted. Note: There are many different variations of this file, depending on the version of software that you are loading. These are some examples:
SIP Release 2.3 or earlier: P0S3xxyy.bin, where xx is the version number and yy is the sub-version number.
SIP Release 3.0 and later: P0S3-xx-y-zz.bin, where xx is the major version number, y is the minor version number, and zz is the sub-version number.
SIP Release 5.0 and later: After this version has been installed, you cannot revert to versions earlier than 5.0. You may still change from SCCP images to SIP images, but both must be version 5.0 or later. For more information, refer to Release Notes for Cisco SIP IP Phone 7940/7960 Release 5.0.
The first four characters of a binary file's name identify the image type. Note: To verify which image the phone is using, choose Settings > Status > Firmware Versions. Different phone models use different processors; the fourth character can help determine the model of phone for which the file is used.

OS79XX.TXT—This file tells the Cisco 7940/7960 which binary to download from the TFTP server. This file is case sensitive and must only contain the name of the file that you want to load, without the .bin extension. For example, if you attempt to load the SIP version 2.3 software, it must contain only the line P0S30203. If you try to load versions 3.0 and later, the file name must be in the format P0S3-xx-y-zz. For example, if you attempt to load the SIP version 7.1 software, OS79XX.TXT must contain the line P0S3-07-1-00. The binary referenced here must also be present in the TFTP root directory. Without this file, the phone does not know which file it needs to retrieve, in order to replace its existing software.
SIPDefaultGeneric.cnf—This file is an example of a default configuration file. This file contains configuration information relevant to all phones.
SIPConfigGeneric.cnf—This file is similar to the previous one, except that it contains information relevant to a specific phone instead of to all phones.
RINGLIST.DAT—Lists audio files that are the custom ring type options for the phones. The audio files listed in the RINGLIST.DAT file must also be in the root directory of the TFTP server.
ringer1.pcm—This file is a sample ring tone that is used by the Cisco 7940/7960.
OS79XX.TXT—This file always contains the universal application loader image.
P003………bin—Nonsecure universal application loader for upgrades from images earlier than 5.x.
P003………sbn—Secure universal application loader for upgrades from images 5.x or later.
P0a3………loads—File that contains the universal application loader and application image, where a represents the protocol of the application image LOADS file: 0 for SCCP, and S for SIP.
P0a3………sb2—Application firmware image, where a represents the application firmware image: 0 for SCCP, and S for SIP.
Step #2
With a text editor (vi or Notepad), rename the file SIPDefaultGeneric.cnf to SIPDefault.cnf (used for global parameters on all phones).

Step #3
With a text editor, rename the file SIPConfigGeneric.cnf to SIPmac_address.cnf, for each phone (for example, SIP002094D245CB.cnf). The MAC address must be specified in capital letters and the extension (.cnf) must be in lower case. The MAC address of the phone can be found on the sticker that is located on the bottom of the phone, or it can be found through the phone LCD screen (choose Settings > Network Configuration > MAC Address). Note: Allow read and write file permissions on the TFTP server for those files.
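As an illustration, a minimal per-phone file might look like the following. The parameter names follow common 7940/7960 SIP firmware conventions and are assumptions here, not values taken from the files above; substitute your own proxy address and credentials:

# SIP002094D245CB.cnf
image_version: P0S3-07-1-00
proxy1_address: 192.168.1.10
proxy1_port: 5060
line1_name: 2001
line1_authname: 2001
line1_password: secret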

Step #4
Unplug the power cord or Ethernet cord (if inline power is used) in order to reset the phones. Ensure that the phones can find the TFTP server. Manually configure the phone’s IP address, gateway address, and TFTP server address; or configure the phone network settings from the Dynamic Host Configuration Protocol (DHCP) server. It is recommended that you not use the TFTP server on the Cisco CallManager, if you have one in your current system.

Step #5
Manually Configure the Phone Network Settings

Complete these steps in order to manually configure the phone network settings:

Press the **# buttons in order to unlock the phone. (This step either locks or unlocks the options, based on the current state.)
Press Settings.
Press the down arrow in order to select Network Configuration and press the Select softkey. There is an unlocked padlock icon in the upper-right portion of your LCD.
Use the toggle button and the arrow keys in order to modify any parameters. When you enter IP addresses, the * key is used for decimal points.
Press the Save softkey in order to save your changes.
Step #6 (Optional)
You can also configure the phone network settings from the Dynamic Host Configuration Protocol (DHCP) server. For SIP phones, make sure that the DHCP server uses Option 66 for the TFTP server. These DHCP options are usually configured from the DHCP server:

IP Address (DHCP Option 50)
Subnet Mask (DHCP Option 1)
Default IP Gateway (DHCP Option 3)
DNS Server Address (DHCP Option 6)
TFTP Server (DHCP Option 66)
Domain Name (DHCP Option 15). Note: Cisco CallManager uses Option 150 for the TFTP server, while SIP phones expect Option 66 for the TFTP server.

Friday, August 27, 2010

POCOS IN VS.NET 2010


The Entity Framework enables you to use custom data classes together with your data model without making any modifications to the data classes themselves. This means that you can use "plain-old" CLR objects (POCO), such as existing domain objects, with your data model. These POCO data classes (also known as persistence-ignorant objects), which are mapped to entities that are defined in a data model, support most of the same query, insert, update, and delete behaviors as entity types that are generated by the Entity Data Model tools.


The Entity Framework supports POCO classes ("plain-old" CLR objects). If you want to enable lazy loading for POCO entities and to have the Entity Framework track changes in your classes as the changes occur, your POCO classes must meet the requirements described in this topic so that the Entity Framework can create proxies for your POCO entities during run time. The proxy classes derive from your POCO types.
The Entity Framework creates proxies for POCO entities if the classes meet the requirements described below. POCO entities can have proxy objects that support change tracking or lazy loading. You can have lazy loading proxies without meeting the requirements for change tracking proxies, but if you meet the change tracking proxy requirements, then the lazy loading proxy will be created as well. You can disable lazy loading by setting the LazyLoadingEnabled option to false.
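For example, assuming you are working with an ObjectContext instance named context, lazy loading can be switched off like this:

//Opt out of lazy loading proxies; change tracking proxies
//can still be created if the classes qualify
context.ContextOptions.LazyLoadingEnabled = false;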

For either of these proxies to be created:

A custom data class must be declared with public access.


A custom data class must not be sealed (NotInheritable in Visual Basic)


A custom data class must not be abstract (MustInherit in Visual Basic).


A custom data class must have a public or protected constructor that does not have parameters. Use a protected constructor without parameters if you want the CreateObject method to be used to create a proxy for the POCO entity. Calling the CreateObject method does not guarantee the creation of the proxy: the POCO class must follow the other requirements that are described in this topic.


The class cannot implement the IEntityWithChangeTracker or IEntityWithRelationships interfaces because the proxy classes implement these interfaces.


The ProxyCreationEnabled option must be set to true.


For lazy loading proxies:

Each navigation property must have a public, virtual (Overridable in Visual Basic), non-sealed (NotOverridable in Visual Basic) get accessor. The navigation property defined in the custom data class must have a corresponding navigation property in the conceptual model. For more information, see Loading Related POCO Entities.


For change tracking proxies:

Each property that is mapped to a property of an entity type in the data model must have non-sealed (NotOverridable in Visual Basic), public, and virtual (Overridable in Visual Basic) get and set accessors.


A navigation property that represents the "many" end of a relationship must return a type that implements ICollection<T>, where T is the type of the object at the other end of the relationship.


If you want the proxy type to be created along with your object, use the CreateObject method on the ObjectContext when creating a new object, instead of the new operator.
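Putting those rules together, a hypothetical pair of entities that qualifies for both lazy loading and change tracking proxies might look like this (the names are illustrative, not from a real model):

public class Invoice
{
    //Public, non-sealed class with a parameterless constructor
    public Invoice() { }

    //Virtual get and set accessors enable change tracking proxies
    public virtual int InvoiceID { get; set; }
    public virtual DateTime InvoiceDate { get; set; }

    //The "many" end of a relationship must be an ICollection<T>
    public virtual ICollection<InvoiceLine> Lines { get; set; }
}

public class InvoiceLine
{
    public virtual int InvoiceLineID { get; set; }
    public virtual Invoice Invoice { get; set; }
}

To get a proxy rather than a plain instance, create new entities through the context, for example: var invoice = context.CreateObject<Invoice>();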

The ADO.NET Entity Framework enables developers to create data access applications by programming against a conceptual application model instead of programming directly against a relational storage schema. The goal is to decrease the amount of code and maintenance required for data-oriented applications. Entity Framework applications provide the following benefits:
Applications can work in terms of a more application-centric conceptual model, including types with inheritance, complex members, and relationships.
Applications are freed from hard-coded dependencies on a particular data engine or storage schema.
Mappings between the conceptual model and the storage-specific schema can change without changing the application code.
Developers can work with a consistent application object model that can be mapped to various storage schemas, possibly implemented in different database management systems.

To add the ADO.NET Entity Data Model item template
On the Project menu, click Add New Item.

In the Templates pane, select ADO.NET Entity Data Model.

Type AdventureWorks.edmx for the model name and then click Add.

The first page of the Entity Data Model Wizard is displayed.

To generate the .edmx file
In the Choose Model Contents dialog box, select Generate from database. Then click Next.

Click the New Connection button.

In the Connection Properties dialog box, type your server name, select the authentication method, type AdventureWorks for the database name, and then click OK.

The Choose Your Data Connections dialog box is updated with your database connection settings.

Ensure that the "Save entity connection settings in App.Config as:" checkbox is checked and the value is set to AdventureWorksEntities. Then click Next.

In the Choose Your Database Objects dialog box, clear all objects, expand Tables, and select the following table objects:

Address
Contact
Product
SalesOrderHeader
SalesOrderDetail
Click Finish to complete the wizard.

The wizard does the following:

Adds references to the System.Data.Entity, System.Runtime.Serialization, and System.Security namespaces.


Generates the AdventureWorks.edmx file that defines the models and mapping.


Creates a source code file that contains the classes that were generated based on the conceptual model content of the .edmx file. You can view the source code file by expanding the .edmx file in Solution Explorer.

Multiple conceptual models can be mapped to a single storage schema.
Language-integrated query (LINQ) support provides compile-time syntax validation for queries against a conceptual model.
To configure a Visual Studio project to use the AdventureWorks Sales Model
In Solution Explorer, add assembly references to System.Data.Entity.dll and System.Runtime.Serialization.dll.
Add the following model and mapping files to the project:
AdventureWorks.csdl
AdventureWorks.msl
AdventureWorks.ssdl

For information about creating these files, see How to: Manually Define the Model and Mapping Files.
Select the three files you just added to the project directory. On the Project menu, click Include In Project.
Select the three files you added to the project directory. On the Project menu, click Properties.
In the Properties pane, set Copy to Output Directory to Copy if newer.
Open the project's application configuration file (App.config) and add the following connection string:

<add name="AdventureWorksEntities"
     connectionString="metadata=.\AdventureWorks.csdl|.\AdventureWorks.ssdl|.\AdventureWorks.msl;
     provider=System.Data.SqlClient;provider connection string='Data Source=localhost;
     Initial Catalog=AdventureWorks;Integrated Security=True;Connection Timeout=60;
     multipleactiveresultsets=true'" providerName="System.Data.EntityClient" />

If your project does not have an application configuration file, you can add one by selecting Add New Item from the Project menu, selecting the General category, selecting Application Configuration File, and then clicking Add.

At the command prompt in your project directory, run the appropriate command for your project (with line breaks removed):

"%windir%\Microsoft.NET\Framework\v4.0\edmgen.exe" /mode:EntityClassGeneration
/incsdl:.\AdventureWorks.csdl /outobjectlayer:.\AdventureWorks.Objects.vb /language:VB
This generates an object layer file in either C# or Visual Basic that is based on the conceptual model.
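For a C# project, the equivalent command should simply swap the language switch and the output file name (again with line breaks removed):

"%windir%\Microsoft.NET\Framework\v4.0\edmgen.exe" /mode:EntityClassGeneration
/incsdl:.\AdventureWorks.csdl /outobjectlayer:.\AdventureWorks.Objects.cs /language:CSharp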

Add the object-layer file generated in the previous step to your project.

In the code page for your application, add the following using statements (Imports in Visual Basic):

Imports System
Imports System.Linq
Imports System.Collections.Generic
Imports System.Text
Imports System.Data
Imports System.Data.Common
Imports System.Data.Objects
Imports System.Data.Objects.DataClasses

If you use the Entity Data Model Wizard in a Visual Studio project, the wizard automatically generates an .edmx file and configures the project to use the Entity Framework. For more information, see How to: Use the Entity Data Model Wizard. You can also manually configure a Visual Studio project to use the Entity Framework. Do this if you have manually defined the model and mapping files or defined them by using the EDM Generator (EdmGen.exe) utility.

The examples in this topic use the model and mapping files for the AdventureWorks Sales model. The AdventureWorks Sales Model is used throughout the task-related topics in the Entity Framework documentation.

MODEL VIEW CONTROLLER AND POCOS IN ASP.NET

MODEL [CLASS/FUNCTION], VIEW [USER INTERFACE AND CLIENT-SIDE VALIDATION], CONTROLLER [EVENTS THAT INVOKE EACH MODEL ACCORDING TO USER INPUT] IN ASP.NET

Model View Controller is nothing but a design pattern used to achieve customizability in our applications. Change is the only thing in the world that will never change. All the products we develop for our clients will undergo many changes, and to accommodate those changes we should concentrate more on our design. Directly jumping into the code may give quick solutions to our problem, but it will not solve our future problems of customizability and re-usability. So, friends, in this article we will discuss MVC, a most popular design pattern, which helps us overcome most of these problems. Initially we may feel that this design pattern needs more time before development can start. Yes, that is true, but the time we spend on design will give us some fruitful benefits.

Good Layering Approach

MVC follows the most common approach of layering. Layering is nothing but a logical split-up of our code into functions in different classes. This approach is well known and widely accepted, and its main advantage is re-usability of code. A good example of the layering approach is UI, Business Logic, and Data Access layers. Now we need to think about how we can extend this approach to gain another great advantage: customizability. The answer is inheritance, one of the most powerful concepts in OOP, and .NET supports it in a nice way.

Visual Inheritance in .Net

Before getting in to Visual Inheritance, we will discuss the basic difference between Inheritance and Interfaces.

Inheritance -- what it is?
Interfaces -- How it should be? (It’s a contract)

Both these answers target the classes we design in our applications. Many of us know these definitions but do not use them extensively in our applications, myself included. Unless we work on a design that uses these concepts to a great extent, we will not appreciate them. Many Microsoft products use them heavily; for example, consider how Commerce Server pipeline components achieve customizability. Commerce Server allows us to create our own component in COM and add it as a stage in the pipeline. How is this achieved? Only through interfaces: we create a COM component, and that component must implement an interface. Likewise, all the objects in the .NET Framework class libraries inherit from System.Object. This tells us the importance of inheritance.

Jumping into Visual Inheritance

Let's try out one of .NET's most interesting features, Visual Inheritance. What exactly is it? Now, we do know what inheritance is, right? No, I don't mean the money that you get from your parents/grandparents; I'm talking about class and interface inheritance, which saves you the headache of re-typing code and provides the luxury of code reusability. Now wouldn't it be nice if the same feature that applies to classes and their methods could also be applied to the GUI forms that we create? I mean, create a base form with our corporate logo so that it appears on all of the firm's screens. Well, such a feature now exists! Yes, you've guessed it: it's Visual Inheritance! Visual Inheritance alone will not solve our problem of extensibility, but it will help us start digging for a solution. Cool, now let's jump directly into our problem with a good example.

Problem Statement

I have an employee master screen, which takes Name and Age as inputs and saves them in our database.



Let's assume this is the basic functionality in my product, and it serves most of my customers' requirements. Now let us assume one of my new clients asks for a change in this form: he needs an additional field that takes the employee's address and stores it in the database. How will we achieve this? Normally we would create a new screen that also has a textbox for the address input; at the same time we would add another column to the employee master table and ship the new binaries, the modified table, and the stored procedure scripts to the client. Do you think this is the right approach? If you ask me, this is a crude way to satisfy a customer. For a requirement like this it is very easy to re-write the entire code, but re-writing code for specific customers will not help us in the long run: we end up maintaining a separate Visual SourceSafe for each and every client. These types of solutions are very difficult to handle, and after some time we will feel that we have messy code.

The right way to solve this problem is to have a good design pattern in place and make sure that the entire team clearly understands the design and implements it in their code. We can solve this with a layered approach using MVC and Visual Inheritance.

Solution to our problem

1. Don’t alter the table; add another table which stores the additional columns like address, and give this junction/extended table a foreign key relationship with our main employee table.
2. Create a new inherited form which inherits from our main employee master screen. To use Visual Inheritance we need to change the access modifier from Friend to Protected; by default VS .NET sets the access modifier to Friend.

View Layer

Our View/UI layer should only have UI-related validations in it; we should not put any business logic into it. This gives us the flexibility to change the UI at any time, and we can also have different UIs for different customers. We can even have a web-based UI for some of the clients.

Controller /Director

The controller is the layer which responds to events in the UI, for example the Save button click on my employee master screen. This layer should act as an intermediary between our View and Model. Initially we may think that this layer is not necessary; I am also not fully convinced by it, but still we need to think. Maybe after some days we will get an answer for this. For now, this layer acts as just an event redirector.

Model

This layer has all our business logic, and it is the most important layer, holding our core functionality. It should be designed so that our core and complex logic is written as functions, and these functions should be marked as overridable so that inheriting classes can re-use or override the logic. We should not make all the functions in the layer overridable, as this may raise security threats.

Database Operations

All the database operations should be done in the Model base class, and all inheriting classes should call it to do database updates. The design can be like this: in my EmpModel base class I will have a protected array list which stores all the objects that need to be updated. All the classes which inherit from this class should add their objects to this array list and then call the base class Update method. This lets us do all the db operations in a single transaction.

In our example, we should create an EmployeeBase class which has properties for name and age. Our EmpModelBase should hold a newly instantiated EmployeeBase object, and our view should fill the object's properties. Finally, the view calls the controller's Save method, the controller calls the Model's Save method, and there we add the Employee object to the array list and call the MyBase.Update method. This method loops through the array list and fires the corresponding db update statements. This is just an example; we need to enhance it depending upon our requirements. A rough sketch follows below.
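Here is that flow sketched in code (all names are illustrative, not from a real code base):

public class EmpModelBase
{
    //Objects queued for persistence by this class and its subclasses
    protected ArrayList pendingObjects = new ArrayList();

    //Core logic is virtual so derived models can re-use or override it
    public virtual void Save(EmployeeBase employee)
    {
        pendingObjects.Add(employee);
        Update();
    }

    //Loops through the queued objects and fires the corresponding
    //db statements inside a single transaction
    protected void Update()
    {
        foreach (object entity in pendingObjects)
        {
            //...issue the INSERT/UPDATE for each queued object...
        }
        pendingObjects.Clear();
    }
}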

Conclusion

1. Layering Approach helps us a lot and we need to enhance it to get full customizability.
2. We need to enhance this with all our design knowledge.
3. No IDE enforces these patterns; it is up to us to code in a clean and disciplined way.
4. Once we are used to these approaches/patterns, we are addicted to them.
In Entity Framework 3.5 (.NET 3.5 SP1), more than a few restrictions were imposed on entity classes. Entity classes in EF needed either to be subclasses of EntityObject, or to implement a set of interfaces we collectively refer to as IPOCO: IEntityWithKey, IEntityWithChangeTracker and IEntityWithRelationships. These restrictions made it difficult, if not downright impossible, to build EF-friendly domain classes that were truly independent of persistence concerns. It also meant that the testability of the domain classes was severely compromised.

All of this changes dramatically with the next release of Entity Framework: 4.0 (.NET Framework 4.0). Entity Framework 4.0 introduces support for Plain Old CLR Objects, or POCO types that do not need to comply with any of the following restrictions:



Inheriting from a base class that is required for persistence concerns
Implementing an interface that is required for persistence concerns
The need for metadata or mapping attributes on type members
For instance, in Entity Framework 4.0, you can have entities that are coded as shown:

public class Customer
{
    public string CustomerID { get; set; }
    public string ContactName { get; set; }
    public string City { get; set; }
    public List<Order> Orders { get; set; }
}

public class Order
{
    public int OrderID { get; set; }
    public Customer Customer { get; set; }
    public DateTime OrderDate { get; set; }
}

You can then use the Entity Framework to query and materialize instances of these types out of the database, and get all the other services offered by the Entity Framework for change tracking, updating, etc. No more IPOCO, no more EntityObject - just pure POCO.
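For example, with a generated context (assumed here to be called NorthwindEntities and to expose an ObjectSet<Customer> named Customers), a LINQ to Entities query over these POCO types could look like:

using (var context = new NorthwindEntities())
{
    //Plain POCO Customer instances are materialized by this query
    var londonCustomers = from c in context.Customers
                          where c.City == "London"
                          select c;

    foreach (Customer customer in londonCustomers)
    {
        Console.WriteLine(customer.ContactName);
    }
}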
There’s quite a bit to discuss here, including:

Overall POCO experience in Entity Framework 4.0
Change Tracking in POCO
Relationship Fix-up
Complex Types
Deferred (Lazy) Loading and Explicit Loading
Best Practices
In this post, I will focus primarily on the overall experience so that you can get started with POCO in Entity Framework 4.0 right away. I’d like to use a simple example that we can walk through so you can see what it feels like to use POCO in Entity Framework 4.0. I will use the Northwind database, and we’ll continue to build on this example in subsequent posts.

Step 1 – Create the Model, turn off default Code Generation

While POCO allows you to write your own entity classes in a persistence ignorant fashion, there is still the need for you to “plug in” persistence and EF metadata so that your POCO entities can be materialized from the database and persisted back to the database. In order to do this, you will still need to either create an Entity Data Model using the Entity Framework Designer or provide the CSDL, SSDL and MSL metadata files exactly as you have done with Entity Framework 3.5. So first I’ll generate an EDMX using the ADO.NET Entity Data Model Wizard.

Create a class library project for defining your POCO types. I named mine NorthwindModel. This project will be persistence ignorant and will not have a dependency on the Entity Framework.
Create a class library project that will contain your persistence aware code. I named mine NorthwindData. This project will have a dependency on Entity Framework (System.Data.Entity) in addition to a dependency on the NorthwindModel project.
Add New Item to the NorthwindData project and add an ADO.NET Entity Data Model called Northwind.edmx (doing this will automatically add the dependency to the Entity Framework).
Go through “Generate from Database” and build a model for the Northwind database.
For now, select Categories and Products as the only two tables you are interested in adding to your Entity Data model.
Now that I have my Entity Data Model to work with, there is one final step before I start to write code: turn off code generation. After all, you are interested in POCO, so remove the Custom Tool that is responsible for generating EntityObject-based code for Northwind.edmx. This will turn off code generation for your model.

We are now ready to write our POCO entities.

Step 2 – Code up your POCO entities

I am going to write simple POCO entities for Category and Product. These will be added to the NorthwindModel project. Note that what I show here shouldn’t be taken as best practice and the intention here is to demonstrate the simplest case that works out of the box. We will extend and customize this to our needs as we go forward and build on top of this using Repository and Unit of Work patterns later on.

Here’s sample code for our Category entity:

public class Category
{
    public int CategoryID { get; set; }
    public string CategoryName { get; set; }
    public string Description { get; set; }
    public byte[] Picture { get; set; }
    public List<Product> Products { get; set; }
}

Note that I have defined properties for scalar properties as well as navigation properties in my model. The navigation property in our model translates to a List<Product>.




NEW .NET FRAMEWORK: ENTITY FRAMEWORK

A primary goal of the upcoming version of ADO.NET is to raise the level of abstraction for data programming, thus helping to eliminate the impedance mismatch between data models and between languages that application developers would otherwise have to deal with. Two innovations that make this move possible are Language-Integrated Query and the ADO.NET Entity Framework. The Entity Framework exists as a new part of the ADO.NET family of technologies. ADO.NET will LINQ-enable many data access components: LINQ to SQL, LINQ to DataSet and LINQ to Entities.
Every business application has, explicitly or implicitly, a conceptual data model that describes the various elements of the problem domain, as well as each element's structure, the relationships between each element, their constraints, and so on.

Since currently most applications are written on top of relational databases, sooner or later they'll have to deal with the data represented in a relational form. Even if there was a higher-level conceptual model used during the design, that model is typically not directly "executable", so it needs to be translated into a relational form and applied to a logical database schema and to the application code.

While the relational model has been extremely effective in the last few decades, it's a model that targets a level of abstraction that is often not appropriate for modeling most business applications created using modern development environments.

Let's use an example to illustrate this point. Here is a fragment of a variation of the AdventureWorks sample database that's included in Microsoft SQL Server 2005:

If we were building a human-resources application on top of this database and at some point wanted to find all of the full-time employees that were hired during 2006 and list their names and titles, we'd have to write the following SQL query:

SELECT c.FirstName, e.Title
FROM Employee e
INNER JOIN Contact c ON e.EmployeeID = c.ContactID
WHERE e.SalariedFlag = 1 AND e.HireDate >= '2006-01-01'
This query is more complicated than it needs to be for a number of reasons:

While this particular application only deals with "employees", it still has to deal with the fact that the logical database schema is normalized so the contact information of employees—e.g. their names—is in a separate table. While this does not concern the application, developers would still need to include this knowledge in all queries in the application that deal with employees. In general, applications can't choose the logical database schema (for example, departmental applications that expose data from the company's core system database), and the knowledge of how to map the logical schema to the "appropriate" view of the data that the application requires is implicitly expressed through queries throughout the code.
This example application only deals with full-time employees, so ideally one should not see any other kind of employee. However, since this is a shared database, all employees are in the Employee table, and they are classified using a "SalariedFlag" column; this, again, means that every query issued by this application will embed the knowledge of how to tell apart one type of employee from the other. Ideally, if the application deals with a subset of the data, the system should only present that subset of the data, and the developers should be able to declaratively indicate which is the appropriate subset.
The problems highlighted above are related to the fact that the logical database schema is not always the right view of the data for a given application. Note that in this particular case a more appropriate view could be created by using the same concepts used by the existing schema (that is, tables and columns as exist in the relational model). There are other issues that show up when building data-centric applications that are not easily modeled using the constructs provided by the relational model alone.

Let's say that another application, this time the sales system, is also built on top of the same database. Using the same logical schema we used in the previous example, we would have to use the following query to obtain all of the sales persons that have sales orders for more than $200,000:

SELECT SalesPersonID, FirstName, LastName, HireDate
FROM SalesPerson sp
INNER JOIN Employee e ON sp.SalesPersonID = e.EmployeeID
INNER JOIN Contact c ON e.EmployeeID = c.ContactID
INNER JOIN SalesOrder o ON sp.SalesPersonID = o.SalesPersonID
WHERE e.SalariedFlag = 1 AND o.TotalDue > 200000
Again, the query is quite complicated compared to the relatively simple question that we're asking at the conceptual level. The reasons for this complexity include:

Again, the logical database schema is too fragmented, and it introduces complexity that the application doesn't need. In this example, the application is probably only interested in "sales persons" and "sales orders"; the fact that the sales persons' information is spread across 3 tables is uninteresting, yet it is knowledge that the application code has to have.
Conceptually, we know that a sales person is associated with zero or more sales orders; however, queries need to be formulated in a way that can't leverage that knowledge; instead, this query has to do an explicit join to walk through this association.
In addition to the issues pointed out above, both queries present another interesting problem: they return information about employees and sales persons respectively. However, you cannot ask the system for an "employee" or a "sales person"; the system does not have knowledge of what that means. All the values returned from queries are simply projections that copy some of the values in the table rows to the result set, losing any relationship to the source of the data. This means that there is no common understanding throughout the application code about core application concepts such as employee, nor can the system adequately enforce constraints associated with those concepts. Furthermore, since the results are simply projections, the source information that describes where the data came from is lost, requiring developers to explicitly tell the system how inserts, updates and deletes should be done by using specific SQL statements.

The issues we just discussed fall into two main classes:

Those related to the fact that the logical (relational) model and related infrastructure cannot leverage the conceptual domain knowledge of the application data model, hence it is not able to understand business entities, their relationships among each other, or their constraints.
Those related to the practical problem that databases have logical schemas that typically do not match the application needs; those schemas often cannot be adapted because they are shared across many applications or due to non-functional requirements such as operations, data ownership, performance or security.
The issues described above are very common across most data-centric enterprise applications. In order to address these issues ADO.NET introduces the Entity Framework, which consists of a data model and a set of design-time and run-time services that allow developers to describe the application data and interact with it at a "conceptual" level of abstraction that is appropriate for business applications, and that helps isolate the application from the underlying logical database schemas.

Modeling Data at the Conceptual Level of Abstraction: The Entity Data Model
In order to address the first issue identified in the previous section what we need is a way of describing the data structure (the schema) that uses higher-level constructs.

The Entity Data Model—or EDM for short—is an Entity-Relationship data model. The key concepts introduced by the EDM are:

Entity: entities are instances of Entity Types (e.g. Employee, SalesOrder), which are richly structured records with a key. Entities are grouped in Entity-Sets.
Relationship: relationships associate entities, and are instances of Relationship Types (e.g. SalesOrder posted-by SalesPerson). Relationships are grouped in Relationship-Sets.
The introduction of an explicit concept of Entity and Relationship allows developers to be much more explicit when describing schemas. In addition to these core concepts, the EDM supports various constructs that further extend its expressivity. For example:

Inheritance: entity types can be defined so they inherit from other types (e.g. Employee could inherit from Contact). This kind of inheritance is strictly structural, meaning that no "behavior" is inherited as happens in object-oriented programming languages. What is inherited is the structure of the base entity type; in addition, instances of the derived entity type satisfy the "is a" relationship when tested against the base entity type.
Complex types: in addition to the usual scalar types supported by most databases, the EDM supports the definition of complex types and their use as members of entity types. For example, you could define an Address complex type that has StreetAddress, City and State properties and then add a property of type Address to the Contact entity type.
With all of these new tools, we can re-define the logical schema that we used in the previous section using a conceptual model:



LINQ to Entities: Language-Integrated Query
Despite the great advancements in integration of databases and development environments, there is still an impedance mismatch between the two that's not easily solved by just enhancing the libraries and APIs used for data programming. While the Entity Framework minimizes the impedance mismatch between logical rows and objects almost entirely, the integration of the Entity Framework with extensions to existing programming languages to naturally express queries within the language itself helps to eliminate it completely.

More specifically, most business application developers today have to deal with at least two programming languages: the language that's used to model the business logic and the presentation layer, which is typically a high-level object-oriented language such as C# or Visual Basic; and the language that's used to interact with the database, which is typically some SQL dialect.

Not only does this mean that developers have to master several languages to be effective at application development, but this also introduces seams throughout the application code whenever there are jumps between the two environments. For example, in most cases applications execute queries against databases by using a data-access API such as ADO.NET and specifying the query in quotes inside the program; since the query is just a string literal to the compiler, it's not checked for appropriate syntax or validated to make sure that it references existing elements such as tables and column names.

Addressing this issue is one of the key themes of the next round of the Microsoft C# and Visual Basic programming languages.

Language-Integrated Query
The next generation of the C# and Visual Basic programming languages contain a number of innovations around making it easier to manipulate data in application code. The LINQ project consists of a set of extensions to these languages and supporting libraries that allow users to formulate queries within the programming language itself, without having to resort to use another language that's embedded as string literals in the user program and cannot be understood or verified during compilation.

Queries formulated using LINQ can run against various data sources such as in-memory data structures, XML documents and through ADO.NET against databases, entity models and DataSets. While some of these use different implementations under the covers, all of them expose the same syntax and language constructs.

The actual syntax details for queries are specific to each programming language, and they remain the same across LINQ data sources. For example, here is a Visual Basic query that works against a regular in-memory array:

Dim numbers() As Integer = {5, 7, 1, 4, 9, 3, 2, 6, 8}

Dim smallnumbers = From n In numbers _
Where n <= 5 _
Select n _
Order By n

For Each n In smallnumbers
    Console.WriteLine(n)
Next
Here is the C# version of the same query:

int[] numbers = new int[] {5, 7, 1, 4, 9, 3, 2, 6, 8};

var smallnumbers = from n in numbers
                   where n <= 5
                   orderby n
                   select n;

foreach (var n in smallnumbers) {
    Console.WriteLine(n);
}
Queries against data sources such as entity models and DataSets look the same syntactically, as can be seen in the sections below.

For more background and further details on the LINQ project see [LINQ] in the references section.

LINQ and the ADO.NET Entity Framework
As we discussed in the section on ADO.NET Entity Framework Object Services, the upcoming version of ADO.NET includes a layer that can expose database data as regular .NET objects. Furthermore, ADO.NET tools will generate .NET classes that represent the EDM schema in the .NET environment. This makes the object layer an ideal target for LINQ support, allowing developers to formulate queries against a database right from the programming language used to build the business logic. This capability is known as LINQ to Entities.

For example, earlier in the document we discussed this code fragment that would query for objects in a database:

using (AdventureWorksDB aw = new
       AdventureWorksDB(Settings.Default.AdventureWorks)) {
    Query<SalesPerson> newSalesPeople = aw.GetQuery<SalesPerson>(
        "SELECT VALUE sp " +
        "FROM AdventureWorks.AdventureWorksDB.SalesPeople AS sp " +
        "WHERE sp.HireDate > @date",
        new QueryParameter("@date", hireDate));

    foreach (SalesPerson p in newSalesPeople) {
        Console.WriteLine("{0}\t{1}", p.FirstName, p.LastName);
    }
}
By leveraging the types that were automatically generated by the code-gen tool, plus the LINQ support in ADO.NET, we can rewrite this as:

using (AdventureWorksDB aw = new
       AdventureWorksDB(Settings.Default.AdventureWorks)) {
    var newSalesPeople = from p in aw.SalesPeople
                         where p.HireDate > hireDate
                         select p;

    foreach (SalesPerson p in newSalesPeople) {
        Console.WriteLine("{0}\t{1}", p.FirstName, p.LastName);
    }
}
Or, in Visual Basic syntax:

Using aw As New AdventureWorksDB(Settings.Default.AdventureWorks)
    Dim newSalesPeople = From p In aw.SalesPeople _
                         Where p.HireDate > hireDate _
                         Select p

    For Each p As SalesPerson In newSalesPeople
        Console.WriteLine("{0} {1}", p.FirstName, p.LastName)
    Next
End Using
A query written using LINQ is processed by the compiler, which means that it gets the same compile-time validation as the rest of the application code. Syntax errors, as well as errors in member names and data types, will be caught by the compiler and reported at compile time, instead of surfacing as the run-time errors that are commonplace when developing with SQL embedded in a host programming language.

The results of these queries are still objects that represent ADO.NET entities, so you can manipulate and update them using the same means that are available when using Entity SQL for query formulation.

While this example shows only a very simple query, LINQ queries can be very expressive and can include sorting, grouping, joins, projection, and so on. Queries can produce "flat" results or manufacture complex shapes for the result by using regular C#/Visual Basic expressions to produce each row.
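For instance, here is a minimal sketch of a grouping query against the same AdventureWorksDB context used above; the State property and the projected member names are illustrative assumptions, not part of the schema discussed earlier.

var salesByState = from p in aw.SalesPeople
                   group p by p.State into g    // p.State is a hypothetical property
                   orderby g.Key
                   select new { State = g.Key, Count = g.Count() };

foreach (var row in salesByState) {
    Console.WriteLine("{0}\t{1}", row.State, row.Count);
}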

Monday, May 24, 2010

How a Smurf Attack Works



How a Smurf Attack Works


Smurf attacks are a type of denial-of-service attack in which the Internet Control Message Protocol (ICMP) and broadcasts are exploited. Normal ICMP echo requests (commonly referred to as pings) are used to verify network connectivity. But since they require a response from the target machine, they can be used maliciously to consume network resources if many are sent at once.

Broadcasts come into the equation because they make it possible to send a request to every computer on a network. Obviously, if such a broadcast were sent repeatedly, the resulting traffic would slow the network down. Imagine 100 computers sending back an ICMP reply at the same time; network performance would take a huge dip.

It should be noted that smurf attacks work by the attacker spoofing the source IP address of the broadcast ping: the source address is actually the IP address of the chosen victim. When every computer on the network responds to the ICMP request, all of those replies go to the computer whose IP address the attacker borrowed. In this instance, the network only acts as an amplifier for the attack; it is not necessarily the victim itself.
Unfortunately, smurf attacks leave victims little room to recover once an attack is under way. Instead, the attack must be staved off at the network level via filtering. On Cisco routers, we can do this specifically through the no ip directed-broadcast command.

No IP Directed-Broadcast
An IP directed broadcast is simply an IP packet whose destination address is the broadcast address of a particular IP subnet. The broadcast in this instance is sent from a different network, as one could probably guess from the command name: the broadcast is being directed to the subnet via its IP address, rather than sent to a unicast address.

Keep in mind that if you are running Cisco IOS version 12.0 or above, you do not need to follow these steps; no ip directed-broadcast has been the default behavior since IOS 12.0. It is strongly recommended that you enable no ip directed-broadcast if your IOS version is below 12.0. If you aren't sure which version you have, simply type the following command from user exec mode:
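A representative session is sketched below; the exact output varies by platform, and the version string in the comment is illustrative only.

Router> show version
! Look for a line near the top of the output similar to:
!   Cisco IOS Software, ... Version 12.4 ...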

As you can tell in the above example, the version number is higher than 12.0; in this instance, we would not need to take further action. If the number happens to be below 12.0, then you will need to apply the no ip directed-broadcast command. First, you should find out the naming convention for your router's interfaces, as shown below.
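One common way to list the interfaces and their names (an assumption here, since several commands would do) is show ip interface brief:

Router> enable
Router# show ip interface brief
! Each row lists an interface name, such as FastEthernet0/0, along with
! its IP address and status.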


Now that we know our interface naming convention, FastEthernet 0/0, we can modify it. You may wish to write this down, since this is how you will refer to your interfaces from now on. You may now proceed to apply the command to the interface, as seen below.
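Assuming the FastEthernet 0/0 interface identified above, the configuration session would look something like this:

Router# configure terminal
Router(config)# interface FastEthernet 0/0
Router(config-if)# no ip directed-broadcast
Router(config-if)# end
Router# copy running-config startup-config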


Note that we only applied this to a single interface (FastEthernet 0/0). It should be applied to all interfaces for maximum protection.

Closing Comments
Very few IP applications make use of IP directed broadcasts, so it is almost always perfectly fine to leave them disabled. You can, however, configure access lists to permit or deny specific IP directed broadcasts, as sketched below. This is usually only feasible on smaller networks, since such access lists can be quite tedious to maintain as the network grows.
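IOS accepts an access list number on the ip directed-broadcast command, so that only broadcasts matching the list are forwarded; the source address used here is hypothetical:

Router(config)# access-list 101 permit ip host 10.1.1.5 any
Router(config)# interface FastEthernet 0/0
Router(config-if)# ip directed-broadcast 101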



Monday, May 17, 2010

ASP.NET 1.1 WINDOWS SERVER 2003 IIS6.0 RUNNABLE CODES







ASP.NET 1.1 WINDOWS SERVER 2003 IIS6.0 RUNNABLE CODES
Password text box - TextMode="Password"
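As a minimal sketch, the corresponding Web Forms markup would look like this; the ID is an assumption:

<asp:TextBox ID="txtPassword" runat="server" TextMode="Password" />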

Monday, May 10, 2010

Enabling ASP.NET 2.0 in Vista RTM

Developing Web Applications on Windows Vista with Visual Studio 2005: Tip/Trick: Using IIS7 on Vista with VS 2005
Microsoft Windows Vista RC1 is now available to the public, and many Visual Studio 2005 web developers are eager to start building ASP.NET 2.0 applications running under Internet Information Services 7.0, which is included with the Vista operating system. To build web apps in this environment, there are just a few steps you need to perform:



First, you need to install the IIS 7.0 and ASP.NET 2.0 Windows components, since they are not installed by default. Because Visual Studio 2005 uses the IIS metabase APIs to create and configure applications in IIS, you must also install a metabase compatibility component for IIS 7.0. To do this, use the "Programs and Features" control panel in Vista, following the steps below.
To install IIS 7.0 and ASP.NET 2.0 on Windows Vista

1. In Windows Vista, open Control Panel and then click Programs and Features.

2. In the right pane, click Turn Windows features on or off. The Windows Features dialog box opens.

3. Select the Internet Information Services check box.

4. Double-click (or expand) Web Management Tools, double-click IIS 6 Management Compatibility, and then select the IIS 6 Metabase and IIS 6 Configuration Compatibility check box.

5. Double-click (or expand) World Wide Web Services, double-click Application Development Features, and then select the ASP.NET check box.

Note: The related options that are necessary for Web application development will automatically be selected.

6. Click OK to start the ASP.NET installation process.

Second, you must run Visual Studio 2005 in the context of an administrator account before you can develop web applications on Windows Vista. By default, Windows runs applications in a limited-privilege user account even when you are logged on to the computer as an administrator. To explicitly run Visual Studio as administrator, follow the steps below.



To run Visual Studio with administrative privileges in Windows Vista

1. In Windows Vista, click Start, click All Programs, and then locate Visual Studio.

2. Right-click Microsoft Visual Studio 2005, and then click Run as administrator.



One more note: If you happen to use SQL Server Express for database development, you'll also need to download and install SQL Server Express SP1, which contains updates required to run on Vista.

Enjoy developing on Vista and be sure to let us know what you think of IIS 7.0!



A few people have pinged me over the last week asking about how to use VS 2005 with an IIS 7.0 web-site on Windows Vista. Specifically, they've run into an issue where they see a dialog message asking them to install the FrontPage Server Extensions, or they get a "You must be a member of the administrators group" message when they try to connect. To quickly summarize, you need to follow the two steps below to enable it:

1) You need to make sure that you have the optional "IIS 6 Management Compatibility" option installed within IIS7. This installs an API for the new configuration system that is compatible with the old metabase APIs (which is what VS 2005 uses). You can select this using the "Turn Windows Features on or Off" option in the Vista Control Panel.

2) You need to make sure you launch VS 2005 with "elevated" privileges so that you have admin privileges to connect to IIS (this is needed to debug a service, as well as to create sites and/or change bindings that impact the entire machine). You can do this by right-clicking on the VS icon and selecting the "Run as Administrator" option when launching VS.


Note that this is needed even if your user is already in the administrators group, as long as you have UAC enabled (which is on by default in Vista). If you disable UAC (which you can also do via the control panel), then this second step isn't required. Running VS 2005 with "elevated" privileges won't be required if you use the built-in VS 2005 web server (since it has reduced privileges already). It is only required when connecting and running/debugging with IIS locally.

We'll be updating Visual Studio 2005 to have more accurate error messages to help guide you to the above steps more naturally in the future. Until then, just use the above steps and you are good to go.


In this post, I will share how I overcame this problem by manually performing extra steps to make ASP.NET work in Visual Studio 2005. Before I start, you should understand that a Vista RTM installation comes locked down for security, which means all web development features are disabled by default.

The following steps will unlock the web development features so you can get back to your web development:

1. Open "Services". You can do this very easily by typing Services in the search textbox located at the bottom of the Start menu panel.
2. Find "Windows Process Activation Service". Change its Startup Type to Automatic, and "Start" the service.
3. Next, find "World Wide Web Publishing Service". Notice that you cannot directly start this service because it is in the "disabled" state. The trick is to change its Startup Type to Automatic first; then you can start the service.
4. Next, open "Command Prompt". Again, you can do this quickly by typing "cmd" in the search textbox.
5. Run the command "aspnet_regiis -i" from inside the ASP.NET 2.0 Framework folder (see the sketch after this list). This will re-register the ASP.NET 2.0 handlers and mappings for all existing web applications.
6. Finally, run the command "net start w3svc". Your web server should start perfectly at this point.
7. Try opening one of your HTTP projects in Visual Studio 2005 and running one of your webforms. Here you go.
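For reference, steps 5 and 6 would typically look like this at the command prompt; v2.0.50727 is the usual .NET 2.0 Framework folder name, but check your %windir%\Microsoft.NET\Framework directory if yours differs:

cd %windir%\Microsoft.NET\Framework\v2.0.50727
aspnet_regiis -i
net start w3svc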