Introduction to XML Web Services

Friday, December 21, 2007

An XML web service can be defined as a unit of code that can be invoked via HTTP requests. Unlike a traditional web application, however, XML web services are not (necessarily) used to emit HTML back to a browser for display purposes. Rather, an XML web service often exposes the same sort of functionality found in a standard .NET code library.

Not only Web-based applications but also console and Windows Forms applications can use the functions provided by a web service.

The simplest everyday example of a web service is one that validates credit card accounts during electronic transactions. Every e-commerce web site must validate credit cards, but there is no need for each such site to write this code in its own application. Instead, all e-commerce websites can call a single web service to validate the credit card.

In general, to use web services your applications have to do three things:

  1. Find the appropriate service (Discovery Service):

Before a client can invoke the functionality of a web service, it must first know of its existence and location.

There are different options for finding the appropriate service.
If you are the individual (or company) building both the client and the XML web service, the discovery phase is quite simple, since you already know the location of the web service in question.

In other cases, an application first uses the Universal Description, Discovery and Integration (UDDI) registry to determine where a specific service can be found. UDDI is a web-based distributed directory that enables web services to list themselves on the Internet and discover each other, similar to a traditional phone book’s yellow and white pages.

  2. Determine what kind of messages the web service will accept, and in what format (Description Service):

Once a client knows the location of a given XML web service, it must fully understand the functionality provided by the service. It must know what inputs, and in what format, are required to use an operation of the service in question. For example, the client must know that there is a method named GetTemperature() that takes some input parameters and sends back a return value.

For this purpose, XML-based metadata is used to describe an XML web service; it is termed the Web Service Description Language (WSDL).
Thanks to this, a web service created in any language can be accessed by an application developed in any language, on any platform.
Using the WSDL description, a client can find out what operations the service provides, what inputs each operation expects, and what it returns.
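As an illustration, a trimmed, hypothetical WSDL fragment (not a complete document; service and element names are made up) describing the GetTemperature() operation mentioned above might look like this:

```xml
<!-- Hypothetical WSDL fragment for a GetTemperature operation -->
<definitions xmlns="http://schemas.xmlsoap.org/wsdl/"
             xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <message name="GetTemperatureRequest">
    <part name="city" type="xsd:string" />
  </message>
  <message name="GetTemperatureResponse">
    <part name="result" type="xsd:double" />
  </message>
  <portType name="WeatherServicePortType">
    <operation name="GetTemperature">
      <input message="GetTemperatureRequest" />
      <output message="GetTemperatureResponse" />
    </operation>
  </portType>
</definitions>
```

From entries like these, a client on any platform can tell that GetTemperature takes a string and returns a double, without knowing anything about how the service is implemented.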

  3. Communication mechanism to get data to and from the web service (Transport Protocol):

Hypertext Transfer Protocol (HTTP) is used as the transport mechanism to access web services. The data packets transferred use the Simple Object Access Protocol (SOAP), which is expressed in XML. The XML provides a placeholder for data, either input to the web service or output back to the application.

SOAP is a standard of the World Wide Web Consortium, and its specification can be found on the W3C web site. It describes an XML-based message-passing format for communication between networked computers and other devices.

HTTP GET and HTTP POST can also be used in place of SOAP, but they are restricted to a limited set of core XML schema data types. SOAP, on the other hand, can be used with complex types.
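For instance, a SOAP request for the GetTemperature() operation mentioned earlier might travel inside an envelope like this (a sketch, with a made-up element namespace):

```xml
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetTemperature xmlns="http://example.com/weather">
      <city>Delhi</city>
    </GetTemperature>
  </soap:Body>
</soap:Envelope>
```

The response comes back in the same envelope format, with the return value wrapped in the Body element.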

.NET web services use SOAP to transfer the data.
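As a minimal sketch (class and namespace names are illustrative), a .NET web service is just a class whose public methods carry the [WebMethod] attribute; ASP.NET takes care of the SOAP plumbing:

```csharp
// WeatherService.asmx.cs -- a minimal, hypothetical ASMX web service
using System.Web.Services;

[WebService(Namespace = "http://example.com/weather")]
public class WeatherService : WebService
{
    [WebMethod]
    public double GetTemperature(string city)
    {
        // A real service would look this value up somewhere;
        // this stub simply returns a constant.
        return 25.0;
    }
}
```

Once this .asmx file is hosted, any client that can send HTTP and parse XML can invoke GetTemperature, regardless of its own language or platform.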

solution for "Validation of viewstate MAC failed" in ASP.NET 2.0

Tuesday, August 21, 2007

If you have a large page that takes a lot of time to load, and you use built-in databound controls such as GridView, DetailsView or FormView that utilize DataKeyNames, then performing a postback before the page has finished loading may produce the following error:

[HttpException (0x80004005): Validation of viewstate MAC failed. If this application is hosted by a Web Farm or cluster, ensure that configuration specifies the same validationKey and validation algorithm. AutoGenerate cannot be used in a cluster.]

It has been observed that whenever a GridView uses DataKeyNames, it requires the ViewState to be encrypted. For this, the Page adds the <input type="hidden" name="__VIEWSTATEENCRYPTED" id="__VIEWSTATEENCRYPTED" value="" /> field just before the closing of the <form> tag.

Now, if the page takes a long time to load and you trigger an event early, this hidden field might not yet have been rendered to the browser.

That means the field is omitted from the postback, so the Page doesn't "know" that the viewstate is encrypted, and this causes the exception.

A solution to this problem is to add the following to the web.config:

<pages enableEventValidation="false" viewStateEncryptionMode ="Never" />

Exception Handling in SSIS Script Task

Wednesday, August 8, 2007

In an SSIS Script Task you can use the same Structured Exception Handling (SEH) as in normal VB.NET or C# code.

Using Structured Exception Handling, you can catch specific errors as they occur and take whatever action is appropriate: letting the user know what kind of error occurred, logging the error, or executing a specific plan of action depending on the error.

In an SSIS Script Task, you can return a Failure status whenever an error is caught.

Here is an example of exception handling in Visual Basic.NET.

Public Sub Main()
    Dim fileContent As String
    Try
        fileContent = System.IO.File.ReadAllText("C:\file.txt")
    Catch ex As System.IO.FileNotFoundException
        Dts.TaskResult = Dts.Results.Failure
        Return
    End Try
    Dts.TaskResult = Dts.Results.Success
End Sub

The above code tries to read the content of the file C:\file.txt; it will fail if the file does not exist and throw an exception of type System.IO.FileNotFoundException. The exception is caught in the Catch block, where we return the TaskResult as Failure. You can also perform any other action you want in this Catch block.

Debug SSIS Script Component

Tuesday, August 7, 2007

While working with your SSIS package, have you ever tried debugging a Script Component transformation by putting a breakpoint in the VB code? Well, I did, and found that, unfortunately, it does not work.

On the other hand, we are able to debug a Script Task using breakpoints in the same way as we do in the Visual Studio IDE. So how do we go about debugging a Script Component?

The only options are to use either a Row Count component or a Data Viewer.

The Row Count component is not very useful here, as it simply states how many rows pass through it.

On the other hand, the Data Viewer is a much better way to debug our Script Component. To add a Data Viewer, select the connector arrow leaving the Script Component that you want to debug, right-click it and select Edit (you can also simply double-click the arrow). This opens the Data Flow Path Editor. Click Add to add the data viewer. On the Configure Data Viewer screen, select Grid as the type, then click the Grid tab. Make sure all the columns you wish to see are in the Displayed Columns list, then close the window.

Now if you run your package, a Data Viewer window will be displayed and it will be filled with the data just after the script component is executed. This will be the data output by the Script Component. Click the Play button to continue package execution, or simply close the window. This way you can monitor all the data rows going through the script component.

I will admit that this workaround of using a Data Viewer for debugging can never match Visual Studio-style debugging, but it is all we have got. We can only hope that future versions will offer the same debugging support for the Script Component as they currently do for the Script Task.

State Management with ASP.NET 2.0 : Profile Feature

Wednesday, July 25, 2007

It is common to have state management in almost all web applications, but its use has always been a contentious issue. A developer has to decide whether user data should be stored per session or persist across sessions.

Using session state, we can easily store information temporarily. This typically works by assigning each new user a unique session key that is used as an index into an in-memory data store and lasts only for the duration of the session.

What if you want to store data across sessions? This is typically done by having a back-end data store indexed by some user key. But again a question arises: what if you want to store data across sessions for anonymous users as well? This is answered very well by the new Profile feature of ASP.NET 2.0.

Using the Profile feature, you can quickly build a web application that stores user information, such as user preferences or any other data, in a database. Profile is similar to Session State except in one regard: it is persistent across sessions. The Profile feature links up strongly with the ASP.NET membership system, which is why data for authenticated users or clients is stored against their real identities instead of arbitrarily generated keys. For anonymous clients, an identifier is generated and stored as a persistent cookie, so that every time the same machine accesses the site, the preferences or data for that client machine are retained.
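As a small sketch of how this looks (the property name is made up for illustration), a profile property is declared in web.config and then accessed through the strongly typed Profile object:

```xml
<!-- web.config: declare a profile property, available to anonymous users too -->
<anonymousIdentification enabled="true" />
<profile>
  <properties>
    <add name="FavoriteTheme" type="System.String" allowAnonymous="true" />
  </properties>
</profile>
```

In page code, assigning Profile.FavoriteTheme = "Dark" stores the value, and it is still there when the same user, even an anonymous one, returns in a later session.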

How you can use this Profile feature effectively, and how to implement better state management with it, is explained in this MSDN Magazine article.

ASP.NET 2.0 : Web Deployment Projects - Website Model of Development

While creating a web project in Visual Studio 2003 with ASP.NET 1.1, I always used to ask myself why on earth we needed to install IIS even though our aim was just to develop a web application, not to host it.

For all of us who thought the same, Microsoft came up with the Web Site model of development in ASP.NET 2.0.

Now, with ASP.NET 2.0 and Visual Studio 2005, instead of creating a new project inside Visual Studio, the Web Site model lets you point to a directory and start writing pages and code. Not only that, the built-in ASP.NET Development Server can be used to quickly test your site: it hosts ASP.NET in a local process and removes the need to install IIS to begin developing.

This new website model enables us to develop our web application without thinking about packaging and deployment.

If your application is complete and you are ready to deploy, you have several options. The simplest choice is to copy your files to a live server and let everything be compiled on-demand (as it was in your test environment). The second option is to use the aspnet_compiler.exe utility and precompile the application into a binary release.

To learn about these deployment techniques in detail, along with more advanced concepts, go through this MSDN Magazine article by Fritz Onion.

Fix Error : "Task Manager has been disabled by your administrator"

When you try to open Task Manager with CTRL+ALT+DEL, do you get the following dialog box saying "Task Manager has been disabled by your administrator"?

Task Manager has been disabled by your administrator

There may be several reasons for this to happen.
1. Your account was blocked via the "Local Group Policy" or "Domain Group Policy".
2. Some registry settings block you from using Task Manager.
3. Your system has been infected by a Trojan that is blocking you from using Task Manager.

The third reason is the most dangerous one, and the only solution is to update your antivirus program, scan your system and remove the Trojan.

Here is how to fix the other issues:

The best recommendation is to go through the following Microsoft support document and follow the procedures mentioned there: Microsoft Support

Another thing I found is a tool that fixes the registry for this issue. You can get the tool from here -> Registry Tool
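In the registry case, the block is usually the DisableTaskMgr value under the current user's policies key. Assuming that is the cause, removing it from a command prompt looks like this:

```shell
reg delete "HKCU\Software\Microsoft\Windows\CurrentVersion\Policies\System" /v DisableTaskMgr /f
```

After deleting the value, Task Manager should open normally again; if it is a domain policy that keeps re-applying the setting, the change will not stick and you will need your administrator.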

Data Structures and Algorithms with Object-Oriented Design Patterns in C#

Wednesday, May 23, 2007

Let me share with you a book whose primary goal is to promote object-oriented design using C# and to illustrate the use of the emerging object-oriented design patterns.

The book deals with software design patterns like: singleton, container, enumeration, adapter and visitor and how we can use them in an Object Oriented Approach with C#.

Virtually all of the data structures are presented in the context of a single, unified, polymorphic class hierarchy. This framework clearly shows the relationships between data structures and it illustrates how polymorphism and inheritance can be used effectively. In addition, algorithmic abstraction is used extensively when presenting classes of algorithms. By using algorithmic abstraction, it is possible to describe a generic algorithm without having to worry about the details of a particular concrete realization of that algorithm.

A secondary goal of the book is to present mathematical tools just in time. Analysis techniques and proofs are presented as needed and in the proper context. In the past when the topics in this book were taught at the graduate level, an author could rely on students having the needed background in mathematics. However, because the book is targeted for second and third-year students, it is necessary to fill in the background as needed. To the extent possible without compromising correctness, the presentation fosters intuitive understanding of the concepts rather than mathematical rigor.

This book presents the various data structures and algorithms as complete C# program fragments. All the program fragments presented in this book have been extracted automatically from the source code files of working and tested programs. By developing the proper abstractions, it is possible to present the concepts as fully functional programs without resorting to pseudo-code or hand-waving.

This book does not teach the basics of programming. It is assumed that you have taken an introductory course in programming and that you have learned how to write a program in C#. That is, you have learned the rules of C# syntax and you have learned how to put together C# statements in order to solve rudimentary programming problems.

View/Download This Book

Validation Controls in ASP.NET

Monday, May 21, 2007

You might have used validation controls in ASP.NET. There are two noteworthy enhancements to the BaseValidator class, from which all validation controls derive.

1) A new property, SetFocusOnError, is now available. When set to True, it automatically generates the necessary JavaScript to set focus to the control being validated when validation fails.

2) Another property, named ValidationGroup, has been added. By setting a common value on a set of validation controls and on a Submit button in your form, you can selectively fire validation for only those controls. In ASP.NET 1.x this kind of granular control over the validation process was not feasible: if you had a set of validation controls, all of them would fire when the form was submitted. This new feature can be leveraged when you have two logical sets of UI elements to validate and the user can initiate two distinct operations using two different buttons on the screen.
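A quick sketch of the idea (control names are illustrative): each button validates only the controls that share its ValidationGroup value, and SetFocusOnError moves focus to the failing control.

```html
<!-- Group 1: login -->
<asp:TextBox ID="txtEmail" runat="server" />
<asp:RequiredFieldValidator ID="rfvEmail" runat="server"
    ControlToValidate="txtEmail" ValidationGroup="LoginGroup"
    SetFocusOnError="True" ErrorMessage="Email is required" />
<asp:Button ID="btnLogin" runat="server" Text="Login"
    ValidationGroup="LoginGroup" />

<!-- Group 2: search -->
<asp:TextBox ID="txtSearch" runat="server" />
<asp:RequiredFieldValidator ID="rfvSearch" runat="server"
    ControlToValidate="txtSearch" ValidationGroup="SearchGroup"
    ErrorMessage="Enter a search term" />
<asp:Button ID="btnSearch" runat="server" Text="Search"
    ValidationGroup="SearchGroup" />
```

Clicking Login here fires only rfvEmail; clicking Search fires only rfvSearch, so an empty search box never blocks a login attempt.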

IButtonControl Interface in ASP.NET 2.0

Friday, May 11, 2007

ASP.NET 2.0 has introduced a new interface named IButtonControl under System.Web.UI.WebControls namespace. The properties and methods of this interface can be implemented to make a control behave like a button in a web form. One of the important properties is PostBackUrl. This can be used to post the current page to a different page, in other words, doing cross-page posting.

On a related note, view state related to cross-page posting is stored in a hidden variable named __PREVIOUSPAGE. The target page can refer to the posting page by using Page.PreviousPage property which is also new in ASP.NET 2.0.
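A minimal sketch of cross-page posting (page and control names are made up): the button posts the current page to a different target page.

```html
<!-- Source.aspx: btnSend posts this page to Target.aspx -->
<asp:TextBox ID="txtName" runat="server" />
<asp:Button ID="btnSend" runat="server" Text="Send"
    PostBackUrl="~/Target.aspx" />
```

In Target.aspx, the posted value can then be read with something like ((TextBox)PreviousPage.FindControl("txtName")).Text, after checking that Page.PreviousPage is not null.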

To know more about the interface, please read the MSDN library link.

ASP.NET 2.0 Provider Model

Tuesday, May 8, 2007

ASP.NET 2.0 introduces a new Provider model, which allows developers to implement a requirement differently without changing a common interface. There are three aspects to it: the provider class, the configuration layer and the data store. The provider class implements the functionality; the configuration layer lets you configure which provider to use, irrespective of the data store. The data store could be Active Directory, SQL Server, Oracle, etc. Let me explain with an example.

Consider authenticating users with a Membership provider. If you want to use it "as is", you need to set up a SQL Server database, make a few configuration entries in Web.config, and place a Login control on your ASP.NET page. No explicit coding is required to authenticate users. On the other hand, if you want your own custom authentication method, you can create your own membership provider by extending the MembershipProvider class and overriding the ValidateUser method and the other "must override" members. While doing so, you do not have to change the code tightly coupled with the UI (the code that invokes the provider).
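A skeleton of such a custom provider might look like the following (names are illustrative; this is not compilable as-is, since MembershipProvider declares many more abstract members that a real provider must also override):

```csharp
using System.Web.Security;

// Hypothetical custom provider: only ValidateUser is sketched here.
public class MyMembershipProvider : MembershipProvider
{
    public override bool ValidateUser(string username, string password)
    {
        // Replace with your own lookup, e.g. against a legacy user store.
        return username == "demo" && password == "secret";
    }

    // ...the remaining abstract members of MembershipProvider
    //    must also be overridden in a real implementation.
}
```

The provider is then wired up via a <membership> entry in Web.config, so the pages and Login control that invoke it never change.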

To learn more, read this MSDN article.

Drag and Drop in Javascript

Monday, May 7, 2007

Implementing drag-and-drop functionality on your web page sounds really cool, but it is not simple, and you will agree with that.
Here we will see how we can implement drag and drop in our web site using JavaScript 1.2 and layers.

Check out this ebook to see how we can implement this.

To download the file click on the image below

Note : This ebook is under the Creative Commons Attribution License. This ebook is free and legal.

How To Start a Website From Scratch

There must be many of you who want to create a website of your own but are not sure how to proceed.

Well, here is a tutorial that will teach you how to create a website from scratch.

To download the file click on the image below

Note : This ebook is under the Creative Commons Attribution License. This ebook is free and legal.

Threading in .NET 2.0

Thursday, May 3, 2007

If you have used threading in .NET framework 1.x, you may be familiar with Suspend and Resume methods. Please note that these two methods have been deprecated in .NET Framework 2.0. As a replacement, use one of the following thread synchronization methods based on the scenario.

1. Use the Interlocked class and associated Add, Increment, Decrement methods if the operation that you want to perform is a very simple mathematical operation.
2. Use lock object to encapsulate the critical section (the lines of code that need to run as one atomic operation).
3. Use Monitor class for more granular control. This class exposes methods such as TryEnter and Wait in order to obtain an exclusive lock.
4. Use ReaderWriterLock class in scenarios where multiple threads need to read from a common source and only one thread needs to write to the common source at any given point of time.
5. For thread synchronization across AppDomains or processes, you can use kernel-level objects exposed through .NET classes such as Mutex, Semaphore and EventWaitHandle.
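To make the first two options concrete, here is a small sketch contrasting an Interlocked increment with a lock-protected critical section:

```csharp
using System;
using System.Threading;

class CounterDemo
{
    static int hits;                         // updated with Interlocked
    static long total;                       // updated inside a lock
    static readonly object sync = new object();

    static void Work()
    {
        for (int i = 0; i < 100000; i++)
        {
            Interlocked.Increment(ref hits); // atomic, no explicit lock
            lock (sync)                      // critical section
            {
                total += 2;
            }
        }
    }

    static void Main()
    {
        Thread t1 = new Thread(Work);
        Thread t2 = new Thread(Work);
        t1.Start(); t2.Start();
        t1.Join(); t2.Join();
        Console.WriteLine(hits);   // 200000
        Console.WriteLine(total);  // 400000
    }
}
```

Without the Interlocked call and the lock, both counters could lose updates when the two threads interleave.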

Authorization in ASP.NET applications

Friday, April 13, 2007

In general, two approaches to Authorization are possible for ASP.NET applications.

The first is role-based: users are grouped into application-defined roles, and members of a particular role share the same privileges within the application. Access to operations is authorized based on the role membership of the caller. Resources are accessed using fixed identities (such as the Web application's or Web service's process identity).

The second approach is resource based. Individual resources are secured using Windows Access Control Lists (ACL). The ACL determines which users are allowed to access a particular resource and also the types of operations the user can perform with that resource. In this case, resources are accessed using original caller’s identity.

.NET Web applications typically use one of the following two security models for resource access.

a) Trusted subsystem model
b) Impersonation/delegation model

Under the Trusted subsystem model, the middle tier service uses a fixed identity to access downstream services and resources. The security context of the original caller does not flow through the service at the OS level, although the application may choose to flow the original caller’s identity at the application level. (Why? It may need to do so in order to support back end auditing requirements or to support per-user data access and authorization). The downstream service “trusts” the upstream service to authorize callers.

Under the Impersonation/delegation model, a service or component (usually somewhere within the logical business services layer) impersonates the client’s identity (using OS-level impersonation) before it accesses a downstream service. If the next service in line is in the same computer, impersonation is sufficient. Delegation is required if the downstream service is located on a remote computer. As a result of delegation, the security context used for the downstream resource access is that of the client.
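In ASP.NET code, OS-level impersonation of the caller can be sketched like this (assuming Windows authentication, so that User.Identity is a WindowsIdentity):

```csharp
using System.Security.Principal;
using System.Web;

// Hypothetical snippet inside a page or service method.
WindowsIdentity caller =
    (WindowsIdentity)HttpContext.Current.User.Identity;
using (WindowsImpersonationContext ctx = caller.Impersonate())
{
    // Downstream resource access here runs under the caller's
    // security context instead of the process identity.
}   // disposing the context reverts to the original identity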

Scale your application to handle more users

Thursday, April 12, 2007

There are two common ways to scale your application to handle more users, a greater data volume, or both.

Scaling up is the first method: you increase the capability of a server by adding more hardware, such as more memory, more processor power or more network ports. It does not add additional maintenance and support costs. However, beyond a certain threshold, adding more hardware to existing servers may not produce the desired results, because for an application to scale up, the underlying framework, runtime and computer architecture must scale up as well. When scaling up, consider which resources the application is bound by. For instance, if your application uses a sizeable chunk of the available memory in the server, adding memory might help. On the contrary, if your application on average uses only 20-30% of the CPU, adding more processor power would not help, because the problem is elsewhere.

The other way to scale an application is scaling out. Under this option, you would add more servers and use load balancing or clustering solutions. Unlike the scaling up option discussed above, adding more servers can also help in improving availability since there are alternate servers that can be used for running the application in case one server fails. However, adding more servers and implementing load balancing and/or clustering solutions may involve additional maintenance and support costs.

Grid View control in ASP.NET 2.0

Tuesday, April 10, 2007

ASP.NET 2.0 replaces the good old classic DataGrid control with the GridView control. Do you remember all the steps you used to take to enable pagination in the DataGrid? ASP.NET 2.0 makes it simpler with the GridView control.
This control is much like the DataGrid server control, but the GridView server control (besides offering many other new features) contains built-in support for paging, sorting and editing data with relatively little work on your part.
Following is a code snippet using the GridView control. It builds a table from the Customers table in the Northwind database, with paging enabled.

<%@ Page Language="VB" %>
<html>
<head runat="server">
<title>GridView Demo</title>
</head>
<body>
<form runat="server">
<asp:GridView ID="GridView1" Runat="server" AllowPaging="True"
DataSourceId="SqlDataSource1" />
<asp:SqlDataSource ID="SqlDataSource1" Runat="server"
SelectCommand="Select * From Customers"
ConnectionString="server=localhost;uid=sa;pwd=password;database=Northwind" />
</form>
</body>
</html>

That’s all you have to do! Paging is enabled by setting the server control attribute AllowPaging of the GridView control:
<asp:GridView ID="GridView1" Runat="server" AllowPaging="True"
DataSourceId="SqlDataSource1" />

As you might have noticed, there is no server-side processing code anywhere above: we don't need to write any server-side code to make this work. Only two server controls need to be included,
one control to get the data and one control to display the data.

Visit this MSDN link for much more about the GridView control.

Ensuring application is Secure by design

Monday, April 9, 2007

You can follow the guidelines listed below for ensuring that your application is Secure By Design.

· When your application stores or transmits data that attackers want, use cryptography. You can implement encryption yourself or require your end users to use platform encryption features such as the Encrypting File System (EFS), Secure Sockets Layer (SSL) or IP Security (IPsec).

o Sample scenarios include storing or transmitting personal information about people, financial information, or authentication credentials.

· Use the authentication and authorization mechanisms built into the .NET Framework.

· Use standard network protocols for network communications when possible. This improves compatibility with firewalls, since firewalls are typically configured to analyze network traffic and drop all packets that are not specifically allowed.

· Implement the principle of least privilege. Design and implement your applications so that they use the least privileges necessary to carry out any action. A simple example is connecting to a database for data access: instead of connecting with a credential that has very high permissions, determine what permissions your application actually requires, create a role that has those permissions, and use a credential appropriate to that specific role.

· Follow known techniques for reducing the attack surface. In other words, minimize the entry points to your application.
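For the least-privilege database example above, the idea can be sketched in T-SQL as follows (all names and the password are illustrative):

```sql
-- Create a role with only the permissions the application needs
CREATE ROLE OrderReader;
GRANT SELECT ON Orders TO OrderReader;

-- Create a low-privilege login/user for the application,
-- then add it to the role
CREATE LOGIN OrderAppLogin WITH PASSWORD = 'S7rong!Pass';
CREATE USER OrderAppUser FOR LOGIN OrderAppLogin;
EXEC sp_addrolemember 'OrderReader', 'OrderAppUser';
```

The application then connects as OrderAppLogin, so even if it is compromised, the attacker can only read the Orders table, not modify or drop anything.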

SQL Injection attacks

Some of you might have heard about SQL injection attacks. A SQL injection attack inserts database commands into user input in order to modify the commands sent from an application to a back-end database. Applications that employ user input in SQL queries can be vulnerable to SQL injection attacks.

Consider the following simplified C# source code, intended to determine whether an order number (stored in the variable Id and provided by the user) has shipped:

sqlString = "SELECT HasShipped FROM Orders WHERE OrderId = '" + Id + "'";
SqlCommand cmd = new SqlCommand(sqlString, Sql);
if ((int)cmd.ExecuteScalar() != 0)
    Status = "Yes";
else
    Status = "No";

Legitimate users will submit an Order Id such as "123", and the code sets the Status variable to Yes or No depending on whether the HasShipped value in the row with that ID number is true. However, a malicious attacker could submit a value such as "1234' drop table customers --". The preceding C# code would then construct the SQL query as

SELECT HasShipped FROM Orders WHERE OrderId = '1234'

drop table customers --'

Assuming the table named customers exists and the application has the right to drop tables, the table would be lost.
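The standard defense is to keep user input out of the SQL text entirely by using parameterized commands. A sketch of the same lookup written safely (variable names follow the snippet above):

```csharp
using System.Data.SqlClient;

// Sql is an open SqlConnection; Id holds the user-supplied order id.
SqlCommand cmd = new SqlCommand(
    "SELECT HasShipped FROM Orders WHERE OrderId = @OrderId", Sql);
cmd.Parameters.AddWithValue("@OrderId", Id);

// The parameter value is never parsed as SQL, so input such as
// "1234' drop table customers --" cannot alter the command.
string Status = ((int)cmd.ExecuteScalar() != 0) ? "Yes" : "No";
```

Parameters also let SQL Server cache the query plan, so this is usually faster as well as safer.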

Partitioning in databases

Saturday, April 7, 2007

When databases grow large, it is ideal to identify the tables containing a high volume of data and split them into multiple smaller sets of tables. This approach is called partitioning. Performance and manageability are the primary benefits.

Partitioning can be done in one of two ways: horizontal and vertical.

Horizontal partitioning involves creating logical groups of data within a table based on one or more columns. Values stored in a horizontal partitioning column help create mutually exclusive sets of data. An example is a table containing order data based on region. Orders pertaining to a region could form one set. Likewise, orders pertaining to a different region could form a different set.

Vertical partitioning, on the other hand, is the process of splitting columns into multiple tables. For example, if a table contains 50 columns, some of the 50 columns could be in one partitioned table, the remainder of the columns could be in another partitioned table.

Typically, in OLAP databases, it is common to adopt horizontal partitioning to store table data in multiple tables. Both SQL Server 2000 and SQL Server 2005 support partitioning; however, the way SQL Server internally implements partitioning has been greatly enhanced in SQL Server 2005.
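In SQL Server 2005, horizontal partitioning by ranges of a key can be sketched with a partition function and a partition scheme (table name, boundaries and filegroup choice are illustrative):

```sql
-- Split rows into ranges by OrderId
CREATE PARTITION FUNCTION pfOrders (int)
    AS RANGE LEFT FOR VALUES (100000, 200000);

-- Map each range to a filegroup (all to PRIMARY here, for simplicity)
CREATE PARTITION SCHEME psOrders
    AS PARTITION pfOrders ALL TO ([PRIMARY]);

-- The table is created on the scheme, partitioned by OrderId
CREATE TABLE Orders
(
    OrderId int NOT NULL,
    Region  varchar(20) NOT NULL
) ON psOrders (OrderId);
```

Queries still see a single Orders table; SQL Server routes rows to the right partition internally.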

An excellent article on SQL Server table partitioning is available here.

Debug Stored Procedure in .NET managed code

Wednesday, April 4, 2007

Did you know that it is possible to debug a stored procedure by tracing through its execution steps from managed code? If your answer is no, here is what you need to do.

  1. Open a Windows application or ASP.NET project where you are invoking a stored procedure.
  2. Set a breakpoint in the step preceding the SQL call.
  3. Open Server explorer, drill down to the stored procedure that you want to debug, open it and set a breakpoint in the first executable statement.
  4. Go to Project’s property pages (by right-clicking on the project name in Solution Explorer), navigate to Start Options category and select SQL Server debugger.
  5. Run the project in debug mode.
  6. When the breakpoint in managed code is reached, step through (F11) the subsequent steps. The IDE would automatically lead you to the stored procedure breakpoint.

If you find any difficulty in doing the above steps, please stop by my desk for a short demo. You can also refer to the Microsoft Support article mentioned below.

Note: The login that you are using needs to be in sysadmin role in SQL Server.

Performance features in .NET framework 2.0

Performance considerations have a major impact on user acceptance of an application, so measuring performance is critical. If and when an end user reports a performance problem, a process needs to be followed to diagnose and troubleshoot it. Though this process differs from company to company and from developer to developer, troubleshooting a performance problem must involve a step that measures current performance. The .NET Framework provides a set of classes in the System.Diagnostics namespace, such as PerformanceCounterCategory, PerformanceCounter and CounterCreationData, to facilitate this measurement. The PerformanceCounterCategory class manages and manipulates PerformanceCounter objects and their categories, while the PerformanceCounter class represents a metric that you want to capture from your application.
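As a small sketch (the category and counter names are made up; creating a category requires administrative rights and is normally done once, e.g. at install time):

```csharp
using System.Diagnostics;

// Create a custom category with one counter, if it does not exist yet
if (!PerformanceCounterCategory.Exists("MyAppCounters"))
{
    PerformanceCounterCategory.Create(
        "MyAppCounters", "Counters for my application",
        PerformanceCounterCategoryType.SingleInstance,
        "OrdersProcessed", "Number of orders processed so far");
}

// Open the counter in writable mode and bump it
PerformanceCounter orders =
    new PerformanceCounter("MyAppCounters", "OrdersProcessed", false);
orders.Increment();
```

The counter then shows up in Performance Monitor (perfmon) under the custom category, alongside the built-in system counters.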

Downloading files with ASP.NET using Save As Dialog

Thursday, March 29, 2007

This article explains how to download a file that is stored in a database, forcing the browser to open a "Save As" dialog box so the file can be saved on the client system.

The content of the file is stored in a table column of the image data type.

Such a dialog-box for saving a file can be displayed by using HttpContext.Current.Response property.

HttpContext is a class that encapsulates all HTTP-specific information about an individual HTTP request. The property HttpContext.Current gets the HttpContext object for the current HTTP request.

Here is an example:

1. HttpContext.Current.Response.Clear();
2. HttpContext.Current.Response.ContentType = "file";
3. HttpContext.Current.Response.AddHeader("content-disposition", "attachment;filename=" + fileName);
4. // Remove the charset from the Content-Type header.
5. HttpContext.Current.Response.Charset = "";
6. byte[] buffer = (byte[])(dsFile.Tables[0].Rows[0]["FILE_CONTENT"]);
7. HttpContext.Current.Response.BinaryWrite(buffer);
8. // End the response.
9. HttpContext.Current.Response.End();

Line 1 clears all content output from the buffer stream.

Line 2 sets the HTTP MIME type for the output stream. The default value is "text/html"; since we want the browser to save the content as a file on the client system, we specify "file".
The AddHeader() method in Line 3 is used to add an HTTP header to the output stream. It accepts two parameters: the name of the HTTP header to add and the value itself. In our case the header name is "content-disposition" and the value is "attachment" along with the file name. This is what causes the Save File dialog to open. If we provide "inline" as the content-disposition value instead, the file will be opened in the associated application rather than the Save File dialog.

In Line 7 we actually write the contents of the file, as binary data, to the HTTP output stream using the BinaryWrite() method. This method takes a byte array containing the bytes to be written. In our case the contents of this buffer are fetched from the database into a dataset dsFile, with the column "FILE_CONTENT" holding the bytes to be written.

The End() method in Line 9 sends all currently buffered output to the client, stops execution of the page, and raises the Application_EndRequest event. This is when we see a message asking whether to save or open the file, displaying information about it such as the file type and file name.

On clicking Save, the save file dialog opens, where we can save the file to our system with the name provided in Line 3 or supply a new name.

How To Create a Proxy Site

So do you want to create a proxy website for yourself? Proxy websites are easy to build and can be up and running within a couple of hours. They can be a great source of revenue through advertisements and referrals, and proxies can also be used to access blocked sites on your office, school, or college network.

So what does it take to create a proxy website of your own so that you can generate revenue from it? Dave Turnbull has created a short, simple walkthrough that takes you through the basics of how to create a proxy site:

Making a proxy is easy. Upload some files, change some graphics, and slap up some ads, and you’re done. But making a successful proxy is a whole different ball game, and what this series of articles aims to help you with. I’ve made a few proxies myself, and for a very small amount of work, I’ve made some decent revenue.

If you want to read the entire article, click here.

Output Parameters in OLE DB Command in SSIS

Monday, March 26, 2007

Today, when I was nearly at the verge of finishing my SSIS package, I found out that I really cannot complete it at the moment. I have a lot of validations on the input data and a lot of lookups to get the reference values, and after that I need to insert the rows into a SQL Server database. It is not a simple insertion; there is substantial logic behind it: checking whether the rows containing individual numbers form a sequence, so that they can be inserted as a range rather than as individual entries. On top of that, I need to check whether a number is already in the table in some range; if so, I need to discard it and redirect that row to an error output file.

So ultimately I had to use an OLE DB Command component and write SQL in it to call the stored procedure handling the logic to insert into the database. I was able to map the input columns of the stored procedure to my source input columns without any difficulty. But the problem was how to know which rows already existed and how to redirect those rows to the error output with a proper custom message. I solved it by trial and error: I added two output parameters to my stored procedure, added two columns to my input, and mapped them to the OUT parameters in the OLE DB Command. Then the task was simple: add a Conditional Split after the OLE DB Command, check whether the indicator (one of the OUT parameters) is TRUE, meaning the insertion failed, and copy those rows to an error file that I maintain, along with the other OUT parameter, @Message.
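
As a rough sketch of the approach (the procedure, table, and parameter names here are made up for illustration):

```sql
-- Illustrative stored procedure with two OUTPUT parameters:
-- a failure indicator and a custom message.
CREATE PROCEDURE dbo.InsertNumber
    @Number   INT,
    @Failed   BIT           OUTPUT,
    @Message  NVARCHAR(200) OUTPUT
AS
BEGIN
    IF EXISTS (SELECT 1 FROM dbo.NumberRanges
               WHERE @Number BETWEEN RangeStart AND RangeEnd)
    BEGIN
        SET @Failed  = 1;
        SET @Message = N'Number already exists in a range';
        RETURN;
    END

    -- ...range-merging insert logic goes here...
    SET @Failed  = 0;
    SET @Message = N'';
END
GO

-- SQL command text placed in the OLE DB Command component; the ?
-- placeholders map to the input columns and the two added columns.
-- EXEC dbo.InsertNumber ?, ? OUTPUT, ? OUTPUT
```

A Conditional Split downstream can then test the mapped indicator column and route failed rows to the error file.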

If you want to log general SQL errors that may occur during the execution of the OLE DB command component you can simply configure its error outputs and log the ErrorMessage.

However, I am still not able to figure out how to map the return value of a stored procedure using an OLE DB Command.

How to prevent Cross Site Scripting Attacks

Saturday, March 24, 2007

One of the common types of security attacks on web-based systems (both intranet and internet) is cross-site scripting. It is a technique that allows hackers to do one of the following things:
  1. Execute malicious script in a client’s web browser.
  2. Insert script, object, applet, form and embed tags.
  3. Steal web session information and authentication cookies.
  4. Access the client computer.

Scenario - Any web page that allows users to enter data in fields is susceptible.

How to defend against cross-site scripting attacks?
  1. Validate user input. Do not trust any input as valid unless proven otherwise.
  2. Do not echo back data entered by a user unless you have validated it.
  3. Do not store secret information in cookies. Secret information includes any and all data items that uniquely identify a person, credit card numbers, etc. If you have to store secret information in a session cookie, encrypt the cookie.
  4. Use HttpOnly cookie option.
  5. Use the cookie "secure" attribute, so cookies are sent only over HTTPS.
  6. Take advantage of ASP.NET features, such as ValidateRequest Page attribute.
  7. Use HtmlEncode and UrlEncode where appropriate.
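
For the last point, here is a minimal sketch (a console program for illustration) of HTML-encoding untrusted input before echoing it back, so the browser displays the text instead of executing it:

```csharp
using System;
using System.Web;   // reference System.Web.dll for HttpUtility

class EncodeDemo
{
    static void Main()
    {
        string userInput = "<script>alert('xss')</script>";

        // HtmlEncode turns markup characters into harmless entities.
        string safeHtml = HttpUtility.HtmlEncode(userInput);
        Console.WriteLine(safeHtml);

        // UrlEncode is the counterpart for values placed in URLs.
        string safeUrl = HttpUtility.UrlEncode(userInput);
        Console.WriteLine(safeUrl);
    }
}
```

Inside an ASP.NET page, Server.HtmlEncode and Server.UrlEncode expose the same functionality.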

Using Temp Tables in SSIS Package Development

Friday, March 23, 2007

Often while working on an SSIS package you will need to temporarily hold your data in a staging table in one Data Flow Task, and then in another task fetch the data from the staging table, perform transformations, load it, and delete the staging table.

That means creating a physical table in your production database to stage data. But in a production environment you may not want to create and destroy objects in the production database, and might prefer to use temp tables instead. This seems easy, and in fact it is, but it requires a trick: modifying the default properties of the components. Let us see what to do.

In the figure there are two Execute SQL Tasks. The Create Temp Table task executes a SQL command to create a temporary table named #tmpMyData. The Drop Temp Table task executes a SQL command to drop the table #tmpMyData.
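
The two SQL commands might look like the following sketch (the column list is illustrative):

```sql
-- Create Temp Table task
CREATE TABLE #tmpMyData (Id INT, Name VARCHAR(50));

-- Drop Temp Table task
DROP TABLE #tmpMyData;
```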

If you execute this package, you will notice that the drop portion of the package fails. The Package Progress tab reports an error message saying the table doesn't exist. This is because the two Execute SQL Tasks do not share the same connection; they only share the same connection manager. Each task builds its own connection from the connection manager, so when the first task finishes, the temp table is destroyed, and the second task creates a new connection.

To fix this, in the property window of the OLE DB connection manager there is a property named RetainSameConnection, which is set to False by default. Changing it to True is our trick and will solve the problem.

By changing this property to "TRUE," both Execute SQL tasks will share the same connection and both will be able to use the temp table.

You can also use this trick for performance in SSIS packages when you are going to perform a task requiring a connection within a loop; otherwise, imagine how many connection openings and closings would occur during that loop.

Three Dimensions to Protect your Computer

Thursday, March 22, 2007

First - Strengthen the defense of your computer

- Install Firewalls
A "firewall" is an isolation technology that separates your internal network from the Internet. The firewall filters traffic when the two networks communicate: it lets in the data and people you "agree" to admit to your network, and blocks the data and people you do not. It can prevent them from changing, copying, or destroying your material. To keep the firewall effective, you must keep it updated.

- Install Anti-virus software
The key with computer viruses is not to "kill" but to "prevent". You should install anti-virus software, enable its real-time monitoring, and keep both the software and the virus definition file updated. To guard against the newest viruses, set the update process to run daily. Also, scan the computer completely for viruses every week.

- Guard against Spyware
Spyware is a program that is installed without the user's authorization. It can collect information and send it to a third party. Spyware can attach itself to software or executable images and break into the user's computer. It can be used to track computer usage, record keystrokes, or take screen captures. To get rid of spyware, you can
- raise the security level of your browser
- install software that guards against spyware
- verify software you plan to install against its official website

Second - Guard against attacks

- Refuse unknown software, emails and attachments
Don't download unknown software. Save all downloaded software into a single directory and scan it before installing. Don't open unknown emails or their attachments; many viruses spread through email. Be especially wary of messages with enticing subject lines.

- Don't visit hacker or pornographic websites
Many viruses and spyware come from these websites. If you browse such a website and your computer is not secure enough, you can imagine what will happen next.

- Avoid shared folders
A shared folder is risky; an outsider can browse it freely. When you need to share a folder, remember to set a password, and when you no longer need to share it, remove the sharing immediately. It is extremely dangerous to share a whole drive: if someone removes a system file, your machine may go down and fail to start up again.

Last - Keep Checking/Update

- Set different and complicated passwords
On the Internet there are countless places that need a password: e-banking, login accounts, email. Try to use a different password for each operation; this limits the loss if one of your passwords is cracked. Avoid meaningful passwords such as birthdays or telephone numbers; use passwords that mix letters and numbers. One more thing: do not choose the "Save Password" option.

- Beware of fraud
The number of Internet fraud cases keeps increasing. A common scheme is to build a fake bank website and send out emails asking for passwords. Before taking any action, verify whether the request is real: phone the bank's hotline, or go to the bank and ask directly.

- Backup
Backup is the last line of defense against attacks. If your computer is hacked, the operating system and software can be reinstalled, but your data can only be restored if you make backups frequently.

Canonicalization : Security Attack

One of the common types of security attacks is due to canonicalization. A canonicalization error is an application vulnerability that occurs when an application parses a filename before the operating system has canonicalized it. Operating systems canonicalize filenames when processing a file, identifying the absolute, physical path of the file given a virtual or relative path.
Files can be accessed using multiple names. For example, the same file might be referenced by an absolute path, by a relative path containing ".." segments, or by a bare filename resolved against the current directory. If your application uses one of these names to validate whether the user has access to the file, an attacker could potentially use one of the other synonymous names.


How to minimize canonicalization errors:
  1. Validate user input to ensure that the entered file name is not a restricted file. Use regular expressions to look for specific file names embedded within the user input string.
  2. Canonicalize the file name before validation. This is the process of deriving the simplest form of the file name. It is the more secure option, because the .NET Framework provides the application with the absolute name of the file. To do this, you can use System.IO.Path.GetFullPath.
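
As a small illustration of the second point (the Windows paths below are made up), Path.GetFullPath reduces synonymous names to one canonical form before any access check:

```csharp
using System;
using System.IO;

class CanonicalizeDemo
{
    static void Main()
    {
        // Two different spellings of the same file...
        string a = Path.GetFullPath(@"C:\temp\..\temp\MyFile.txt");
        string b = Path.GetFullPath(@"C:\temp\MyFile.txt");

        // ...resolve to one absolute, physical path, which is the
        // name that should be validated against the restricted list.
        Console.WriteLine(a);
        Console.WriteLine(a == b);
    }
}
```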

New Features in Visual Basic 9.0

Wednesday, March 21, 2007

Several new features have been added to the language. A few of them are listed below:

  1. Implicitly Typed Local Variables

This feature allows us to declare variables without specifying their data type; the type is inferred from the value assigned on the right-hand side.

  2. Object and Array Initializers

The new object initializers in VB 9.0 are an expression-based form of “With” for creating complex object instances concisely. For example, we already know that the With statement simplifies access to multiple members of an object by using a member-access expression starting with a period which is evaluated as if it were preceded by the object name itself, as in

Dim MyCounty As New Country()

With MyCounty

.Name = "My County"

.Area = 555

.Population = 15432

End With

Using the new object initializers, the above two statements can be combined as

Dim MyCounty = New Country With { .Name = "My County", _

.Area = 555, _

.Population = 15432 }

  3. Anonymous Types

VB 9.0 enables us to create objects without declaring or defining a named type; the compiler generates the type. All you need to do is initialize something that looks like it and access its public fields/properties.

Some more information about anonymous types: here.

  4. Deep XML Support

LINQ to XML is a new, in-memory XML programming API designed specifically to leverage the latest .NET Framework capabilities such as the Language-Integrated Query framework. Just as query comprehensions add familiar, convenient syntax over the underlying standard .NET Framework query operators, Visual Basic 9.0 provides deep support for LINQ to XML through XML literals and XML properties.

For detailed explanation check the msdn link at the bottom of this post.

  5. Query Comprehensions

SQL-like queries can now be used, with operators such as Select, Order By, and Where, to get the desired data from collections. For this purpose a query expression is used, which is somewhat similar to SQL syntax; due to some clashes with VB syntax, a few differences exist that should be learned.

  6. Extension Methods and Lambda Expressions

Much of the underlying power of the .NET Framework standard query infrastructure comes from extension methods and lambda expressions.

Extension methods are shared methods marked with a custom attribute that allows them to be invoked with instance-method syntax. Most of the query extension methods have similar signatures: the first argument is the instance against which the method is applied, and the second is typically the predicate or projection to apply.

Some more details about lambda expressions can be gathered from here.

  7. Nullable Types

Nullable values from relational databases used to be inconsistent with the value types in .NET. Now we can declare value types as nullable to overcome this inconsistency.

  8. Relaxed Delegates

In Visual Basic 9.0, binding to delegates is relaxed to be consistent with method invocation. That is, if it is possible to invoke a function or subroutine with actual arguments that exactly match the formal-parameter and return types of a delegate, we can bind that function or subroutine to the delegate. In other words, delegate binding and definition will follow the same overload-resolution logic that method invocation follows.

To have a detailed look at each of the features mentioned above, read more.

Sending Mails in .NET framework 2.0 : new namespace System.Net.Mail

Tuesday, March 13, 2007

If you have used System.Web.Mail namespace in .NET 1.x for sending emails programmatically, expect a surprise. All classes within this namespace have been deprecated in favor of the new System.Net.Mail namespace. System.Net.Mail contains classes such as MailMessage, Attachment, MailAddress, SmtpClient, etc to help us send emails in the 2.0 world. The features provided by this namespace, in a nutshell, are given below.

  • MailMessage is the main class that represents an email message.
  • Use MailAddress class to represent the sender and each recipient.
  • Use SmtpClient class to connect to the SMTP server and send the email, both synchronously and asynchronously.
  • Use AlternateView class to create the email content in alternate formats, say one in HTML and the other in plain text, to support different recipient types.
  • Use LinkedResource class to associate an image with the email content.
  • SmtpPermission class and SmtpPermissionAttribute can be used for code access security.
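
As a minimal sketch of how these classes fit together (the server name and addresses below are placeholders):

```csharp
using System.Net.Mail;

class MailDemo
{
    static void Main()
    {
        // Addresses and SMTP server here are placeholders.
        MailMessage message = new MailMessage(
            new MailAddress("sender@example.com", "Sender"),
            new MailAddress("recipient@example.com", "Recipient"));
        message.Subject = "Hello from System.Net.Mail";
        message.Body = "<b>Hello!</b>";
        message.IsBodyHtml = true;

        // An attachment, if needed:
        // message.Attachments.Add(new Attachment(@"C:\report.pdf"));

        SmtpClient client = new SmtpClient("smtp.example.com", 25);
        client.Send(message);   // or SendAsync(...) for asynchronous delivery
    }
}
```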

To know more about some of these classes, read this article.

Code Analysis in Visual Studio 2005 Team Suite

Friday, March 9, 2007

If you are using VS.NET 2005 Team Suite, code analysis is built into the IDE itself. In older versions of VS.NET, you might have used FxCop externally to check code against pre-defined rules, or integrated it with the IDE by adding FxCop as an add-in.

To enable code analysis, open the project properties, navigate to the Code Analysis tab, select "Enable Code Analysis", and choose the rules or rule categories that you want to run. Once enabled, code that does not conform to these rules is reported during the build as warnings. Based on project needs, you can configure some of the rules to report errors instead of warnings; this level of granular control helps enforce strict conformance. For example, any violation of a design rule can be considered bad practice and hence configured to produce an error rather than just a warning.

Microsoft ends JPEG ...Going to HD Format

March 08, 2007 (IDG News Service) -- Microsoft Corp. will soon submit to an international standards organization a new photo format that offers higher-quality images with better compression, the company said today.

The format, HD Photo -- recently renamed from Windows Media Photo -- is taking aim at the JPEG format, a 15-year-old technology widely used in digital cameras and image applications.

Both formats take images and use compression to make the file sizes smaller so more photos can fit on a memory card. During compression, however, the quality of the photo tends to degrade.

Microsoft said HD Photo's lightweight algorithm causes less damage to photos during compression, with higher-quality images that are half the size of a JPEG.

Read More

Using View State in Server controls

Wednesday, March 7, 2007

View state is serialized and deserialized on the server. To reduce CPU cycles, reduce the amount of view state your application uses, and disable view state where you don't need it. You can disable view state if you are doing at least one of the following:
· Displaying a read-only page where there is no user input.
· Displaying a page that does not post back to the server.
· Rebuilding server controls on each post back without checking the postback data.

As the view state grows larger, it affects performance in the following ways.
· Increased CPU cycles are needed to serialize and deserialize the view state content.
· Pages take longer to download because they are larger.
· Very large view state can impact the efficiency of garbage collection.
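
For example, view state can be turned off declaratively, either for a single control or for a whole page (the control name below is illustrative):

```aspx
<%-- Per control --%>
<asp:Label ID="lblStatus" runat="server" EnableViewState="false" />

<%-- Per page, in the Page directive --%>
<%@ Page Language="C#" EnableViewState="false" %>
```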

Alpha Geek: Copy TV shows to your iPod

Tuesday, March 6, 2007

So Apple wants you to pony up $1.99 per episode of Heroes when you're already paying the cable company for it? Nuh-uh. Don't think so. Seems like you should be able to copy that show--or any other--from your media center PC or TiVo right to your iPod. You can, and it's easier than you might think. (Easier, even, than copying DVDs.)

read more | digg story

Common Table Expressions in SQL Server 2005 (CTE)

Common Table Expressions (CTE for short) are a new feature in SQL Server 2005. A CTE is a temporary result set defined as part of SELECT, INSERT, UPDATE, DELETE, and CREATE VIEW statements. A very simple usage of a CTE is given below.

WITH MyCTE (ListPrice, SellPrice) AS
(
    SELECT ListPrice, ListPrice * .95 FROM Production.Product
)
SELECT * FROM MyCTE WHERE SellPrice > 100

A CTE definition requires three things, viz, a name for the CTE (MyCTE in the above example), an optional list of columns (ListPrice and SellPrice) and the query following the AS keyword.

Using a CTE can improve readability in complex queries involving several tables. It is a good replacement in cases where you would otherwise use a temporary table just once after creation. The advantages of using a CTE are given below.

  • Create a recursive query.
  • Substitute for a view when the general use of a view is not required; that is, you do not have to store the definition in metadata.
  • Enable grouping by a column that is derived from a scalar subselect, or a function that is either not deterministic or has external access.
  • Reference the resulting table multiple times in the same statement.
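
As a sketch of the first point, a recursive CTE can generate a number sequence (the CTE and column names here are illustrative):

```sql
-- Recursive CTE generating the numbers 1 through 10.
WITH Numbers (n) AS
(
    SELECT 1          -- anchor member
    UNION ALL
    SELECT n + 1      -- recursive member
    FROM Numbers
    WHERE n < 10
)
SELECT n FROM Numbers;
```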

To learn more about the capabilities and limitations of CTEs, visit the MSDN site.

Partial Classes in .NET framework 2.0

.NET Framework 2.0 introduces the concept of partial classes. Partial classes allow you to split a class definition across multiple source files. Separating the class definition lets multiple programmers work on the same class simultaneously and allows better organization of code within a class. VS.NET 2005 uses this concept to hide designer-generated code when you create Windows Forms.

To create a partial class, add the “partial” keyword to the class definition. To learn more about partial classes, read the MSDN library article mentioned below or visit this link. Though the links point to C#, partial classes are available in Visual Basic also.
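
A minimal sketch of the idea (the file names and members are illustrative; both parts compile into a single class):

```csharp
// Customer.Part1.cs -- state and properties.
public partial class Customer
{
    private string name;

    public string Name
    {
        get { return name; }
        set { name = value; }
    }
}

// Customer.Part2.cs -- could live in another file, edited by
// another developer or emitted by a code generator.
public partial class Customer
{
    public string Describe()
    {
        return "Customer: " + Name;
    }
}
```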

Isolated Storage in .NET framework

Monday, March 5, 2007

Isolated storage is a private file system managed by the .NET Framework. As with the standard file system, you can use familiar techniques (such as StreamReader and StreamWriter) to read and write files. However, isolated storage works with fewer privileges, making it useful for implementing least privilege. Additionally, isolated storage is private, and isolated by user, domain, and assembly.

When to use isolated storage:

Isolated storage is not always the best solution for storing persistent data. Isolated storage should not be used to store configuration and deployment settings, which administrators control. It is a good way to store user preferences, however - because administrators do not control them, they are not considered to be configuration settings.

You can use isolated storage for sensitive data, but don't rely on it for security: encrypt the data before writing it to isolated storage. Isolated storage should not be used to store high-value secrets, such as unencrypted keys or passwords, because it is not protected from highly trusted code, unmanaged code, or trusted users of the computer.

You can look up the System.IO.IsolatedStorage namespace to learn more about how you can leverage the feature in your .NET applications.
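
A small sketch of writing and reading a user-scoped file (the file name and content are illustrative):

```csharp
using System;
using System.IO;
using System.IO.IsolatedStorage;

class IsoDemo
{
    static void Main()
    {
        // Store scoped to the current user and assembly.
        using (IsolatedStorageFile store =
               IsolatedStorageFile.GetUserStoreForAssembly())
        {
            // Write a preference using the familiar stream classes.
            using (IsolatedStorageFileStream stream =
                   new IsolatedStorageFileStream("prefs.txt", FileMode.Create, store))
            using (StreamWriter writer = new StreamWriter(stream))
            {
                writer.WriteLine("theme=dark");
            }

            // Read it back.
            using (IsolatedStorageFileStream stream =
                   new IsolatedStorageFileStream("prefs.txt", FileMode.Open, store))
            using (StreamReader reader = new StreamReader(stream))
            {
                Console.WriteLine(reader.ReadLine());
            }
        }
    }
}
```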

Keywords Planning : Plan your content with keywords

Sunday, March 4, 2007

While optimizing your website with good keywords is an important part of your search engine strategy, I do think, however, that too many webmasters spend way too much time tweaking it to death. I don't think this is a good idea, nor that it benefits their websites. All that time spent on one thing, while neglecting the rest of their marketing strategies, hurts their online business in the long run. So much time wasted getting those keywords just right actually hurts the quality of the content on their web pages.

I know that most of us are taught to find a main keyword and build the website around that keyword. That's what I did in the beginning, and I think it's a big mistake, because it takes away from the quality of the content: too much focus is put on fitting the keyword in to get the page optimized for the search engines. I've seen so many websites where you can tell the site was built around specific keywords, because so much of the content is really hard to understand and doesn't make much sense at all. You can actually pick out the keywords because they are used so many times. While you need to use your keywords throughout your content, you don't need to overdo them. Using your keywords too often will actually hurt you with the search engines more than it will help you.

There’s a better way to optimize your website without hurting your content. I have found that the best way to optimize a web page is to use just one keyword that is super targeted to the content on that web page. Use the best keyword that you can possibly find and put it aside for the moment. Using a good word processor go and write the content for your web pages. Forget about using your keyword or writing any html tags altogether until you have finished writing the content. When you have finished writing your content, read through it to make sure that it makes sense. Also check to see if your content doesn’t already have a keyword that you may have already written unintentionally that may be better than the one that you have already chosen. I have found many great keywords by going through this process. It is advantageous to check to see if there are any hidden gems sitting there in the already written content.

If there is not a better keyword within your content, you can now go back and start inserting your keyword into the body of your web page. The main objective is that your keyword blends in with the content in a way that makes sense; you will probably have to make some changes so that it does. Of course you will need to put your keyword in the usual places: the title, the description meta tag, the keyword meta tag, the heading, and the body. The body is where the keyword is most abused by webmasters. While the keyword needs to appear throughout the body, it doesn't need to be there hundreds of times; using your keyword two to three percent of the time within the body is more than sufficient. Repeat the above process for all of your web pages, and you will have a well optimized website that makes perfect sense. The last, and probably most important, thing to do when you are finished is to stop optimizing. It takes time to see whether your chosen keyword will be of benefit; if you use a super targeted keyword, it will be. Forget about it and go focus on the things you need to get done with your online business that you may have been neglecting.

About The Author
Brian Queenan is the owner of Learn what it really takes to market online.

Encrypt your web browsing session (with an SSH SOCKS proxy)

Saturday, February 24, 2007

Using a simple SSH command, you can encrypt all your web browsing traffic and redirect it through a trusted computer when you're on someone else's network. Today we'll set up a local proxy server that encrypts your online activity from your Mac, PC, or Linux desktop. Check it here -->
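
The core of the trick is a single SSH command that opens a local SOCKS proxy (the host name and port below are placeholders):

```shell
# Open a SOCKS proxy on local port 8080, tunnelled to a trusted host.
# Afterwards, point your browser's SOCKS proxy setting at localhost:8080.
ssh -N -D 8080 user@trusted-host.example.com
```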

read more | digg story

System.String Or StringBuilder

Thursday, February 8, 2007

Many of us know that we should use a StringBuilder object instead of String when building strings whose content is unknown at coding time. Some of us may not know the real reason behind this guidance, other than the high-level fact that StringBuilder is more optimized. Here is the reason, if you are interested.

The System.String class is immutable, which means that a value cannot be changed once assigned. Every time you assign a value to a string variable, .NET allocates a new memory location and stores the value there. Consider the following snippet of code. The .NET Framework allocates memory for the variable four times; at the end of the fourth assignment, only the fourth allocation is still referenced, and the previous three are discarded and collected later by the garbage collector. The more assignments there are, the more allocations are discarded. StringBuilder, on the other hand, is mutable and does not follow this approach to allocating space for strings.


string s;

s ="This ";

s += "is the first ";

s += "sentence in the line. ";

s += "The sentence was formed using multiple assignment statements.";
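
For comparison, a sketch of the same string built with StringBuilder, which appends into a single growable buffer instead of allocating a new string for every assignment:

```csharp
using System;
using System.Text;

class BuilderDemo
{
    static void Main()
    {
        StringBuilder sb = new StringBuilder();
        sb.Append("This ");
        sb.Append("is the first ");
        sb.Append("sentence in the line. ");
        sb.Append("The sentence was formed using multiple assignment statements.");

        string s = sb.ToString();   // one final string allocation
        Console.WriteLine(s);
    }
}
```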



Themes and Skins in Visual Studio 2005 ASP.NET 2.0

Tuesday, January 30, 2007

ASP.NET 2.0 provides rich support for Themes which help in defining consistent look and feel across multiple pages in a web application. One of the file types that you could create within the definition of a theme is a “skin” file. After creating a theme folder in the web application, you can create a skin file by adding a new item and selecting the type as “Skin”. You can add definitions of commonly used server controls in the skin file. When the theme is attached to a web page, all controls declared within the page will inherit the formatting you specified in the skin file.

Look at the sample code snippet below. I have declared these “templates” within a skin file under my theme named “TestTheme” and I have specified this theme in the Page directive of each web page in the application.

---------------------- In the skin file --------------------------------------

<asp:Button runat="server" Font-Bold="true" />

<asp:TextBox runat="server" BackColor="Aquamarine"/>

---------------------- In the ASP.NET page --------------------------------------

<%@ Page Language="VB" AutoEventWireup="false" CodeFile="ThemeTester.aspx.vb" Inherits="ThemeTester" EnableTheming="true" Theme="TestTheme" EnableViewState="true" Trace="false" TraceMode="SortByTime"%>

At runtime, the definition for the ASP Button (with Font-Bold set to true) is applied to all ASP buttons on the page. This makes the ASPX coding a lot simpler.



2009 ·Techy Freak by TNB