
Wednesday, 2 November 2011

Opening with McAfee

Basic Function and Scope of the Position

We are seeking smart, motivated programmers who can conceptualize, build, ship, and support
technically challenging products. Passion for technology is a must, and we expect this individual to
have familiarity with a variety of languages, platforms, and frameworks.
Specific Responsibilities/Functions
Hands-on development of exciting new enterprise/consumer software solutions
Work closely with development leaders and managers to plan releases
Participate in the definition and design of new features and products
Kernel-mode programming, including (but not limited to) writing kernel drivers, debugging,
etc.

Experience, Knowledge and Skills

Has a solid software engineering background with strong experience in systems level
programming
Understanding of Design Patterns, Development Processes and Best Practices
Inspired and Inspiring Technologist
The ideal candidate has hands-on experience with some or all of the following: Windows
services and OS hooks on both Windows & Unices, Windows Management Interface (WMI)
and system APIs for 32-/64-bit OS, Windows networking and device drivers, Windows user
interface programming (MFC, WPF, .NET), Active Directory, OS security
Windows/UNIX/AIX/Solaris/HP-UX/Linux kernel programming (preferably file systems)
2+ years of systems programming experience
2+ years of C/C++/C# programming
Knowledge of Compilers, linkers and loaders
Strong troubleshooting skills and the ability to quickly break down and understand complex
issues
Experience with software engineering best practices, including use of version control
systems, change and defect tracking tools, and test automation tools
Excellent verbal and written communication skills
Experience dealing with new OS versions, for example Windows versions such as Windows 7,
Windows Server 2008, etc.
Prior professional software development experience
Understanding and Appreciation of User Experience Design
Bachelor's in CS or equivalent work experience
Experience working with virtualization platforms such as VMware a plus


With Regards
Harleen Panesar
Sr.Relationship Executive
New Era India Consultancy Pvt. Ltd
Direct :- +91 011 - 40888470 MB: 09711151411
Kalka Ji,New-Delhi 110019
Mail@ :- harleen.kaur@neweraindia.com
Weblink:- www.neweraindia.com

Tuesday, 10 August 2010

c++

What are the different types of Storage classes?

There are four types of storage class: automatic, register, external, and static.

auto -> the default storage class for ordinary local variables
extern -> for accessing variables that are defined in another file (translation unit)
static -> the variable is allocated only once and keeps its value across function calls (for class members, a single copy is shared by all objects)
register -> requests faster access by hinting that the variable be kept in a CPU register
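
A minimal sketch illustrating the four storage classes (the names g_counter and count are illustrative):

#include <iostream>

int g_counter = 0;          // external: another file could access it with 'extern int g_counter;'

void count()
{
    static int calls = 0;   // static: allocated once, value persists across calls
    register int step = 1;  // register: hint to keep the variable in a CPU register (pre-C++17)
    int local = step;       // automatic ('auto') storage: an ordinary local variable
    calls += local;
    std::cout << "count() called " << calls << " times\n";
}

int main()
{
    count();
    count();                // prints "count() called 2 times", because 'calls' is static
    ++g_counter;
    return 0;
}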
========
Explain the need for "Virtual Destructor".
In the case of inheritance, objects should be destroyed in exactly the opposite order of their construction. If the virtual keyword is not added before the base class destructor declaration, the derived class destructor will not be called when deleting through a base class pointer. Hence there will be a memory leak if memory was allocated for derived class members while constructing the object.

A virtual destructor ensures a proper calling order for the destructors in class hierarchy.

class base
{
public:
virtual ~base()
{
}
};
class derived:public base
{
int *ptr;
public:
derived()
{
ptr=new int;
}
~derived()
{
delete ptr;
}
};
int main()
{
base *bptr=new derived;
delete bptr;
return 0;
}

Here in main(), when we create an object of the derived class using new, its constructor gets called, so the memory is allocated and the address returned by new is stored in the base class pointer. When delete is called on bptr, only the base class destructor would be called and delete ptr would never be executed.
To correct this problem we must declare the base class destructor as virtual.
===============
They are different. A C struct can only contain data, while a C++ struct can also contain member functions and access specifiers such as public and private, just like a class (though not totally the same as a class: a struct's members are public by default, while a class's are private).

C structure:::
1)//single line comment about the programme.
2)headerfiles
3)macros,typedef...
4)global declarations
5)main()
6)statements.......


C++ STRUCTURE:::

1)/*multi line comments*/
2)headerfiles
3)typedef,inline functions
4)global variables.
5)class starts
6)access specifiers
7)statements.....
8)class ends with a ;
9)main starts
10)statements.
11)main ends
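
A minimal sketch contrasting the two kinds of struct (the name Point is illustrative):

#include <iostream>

// In C, a struct can only hold data:
//     struct Point { int x; int y; };

// In C++, a struct may also contain member functions and access specifiers:
struct Point
{
private:
    int x, y;                        // access control is allowed in a C++ struct
public:
    Point(int px, int py) : x(px), y(py) {}
    void print() const { std::cout << "(" << x << "," << y << ")\n"; }
};

int main()
{
    Point p(3, 4);
    p.print();                       // prints (3,4)
    return 0;
}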
=========
Difference between an "assignment operator" and a "copy constructor"

Copy constructor is called every time a copy of an object is made. When you pass an object by value, either into a function or as a function's return value, a temporary copy of that object is made.

The assignment operator is called whenever you assign to an already-constructed object. The assignment operator must check whether the right-hand side of the assignment is the object itself, and it performs the copy only when the two sides are not the same object (the self-assignment check).
=============
A copy constructor is used to initialize a newly declared object from an existing object. C++ calls the copy constructor in each of the following cases: (1) a variable is declared and initialized from another object; (2) a value parameter is initialized from its corresponding argument, e.g. f(p); // copy constructor initializes the formal value parameter; (3) an object is returned by value from a function. If there is no copy constructor defined for the class, C++ uses the default copy constructor, which copies each field, i.e., makes a shallow copy.
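
A minimal sketch showing when each is invoked (the class name Buffer is illustrative):

#include <cstring>

class Buffer
{
    char *data;
public:
    Buffer(const char *s)
    {
        data = new char[std::strlen(s) + 1];
        std::strcpy(data, s);
    }

    // Copy constructor: builds a brand-new object from an existing one.
    Buffer(const Buffer &other)
    {
        data = new char[std::strlen(other.data) + 1];
        std::strcpy(data, other.data);
    }

    // Assignment operator: replaces the contents of an already-constructed object.
    Buffer &operator=(const Buffer &other)
    {
        if (this != &other)                  // self-assignment check
        {
            delete[] data;
            data = new char[std::strlen(other.data) + 1];
            std::strcpy(data, other.data);
        }
        return *this;
    }

    ~Buffer() { delete[] data; }
};

int main()
{
    Buffer a("hello");
    Buffer b = a;     // copy constructor: b is being created
    Buffer c("bye");
    c = a;            // assignment operator: c already exists
    return 0;
}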
====================
The free subroutine frees a block of memory previously allocated by the malloc subroutine.
Undefined results occur if the Pointer parameter is not a valid pointer. If the Pointer parameter is a
null value, no action will occur. The realloc subroutine changes the size of the block of memory
pointed to by the Pointer parameter to the number of bytes specified by the Size parameter and
returns a new pointer to the block. The pointer specified by the Pointer parameter must have been
created with the malloc, calloc, or realloc subroutines and not been deallocated with the free or
realloc subroutines. Undefined results occur if the Pointer parameter is not a valid pointer.

The realloc() function is used to resize a block of memory, whereas the free() function is used to free memory that was allocated by the malloc() or calloc() functions.
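
A minimal sketch of the allocate/resize/free sequence (the sizes are arbitrary):

#include <cstdio>
#include <cstdlib>

int main()
{
    // Allocate room for 5 ints.
    int *p = (int *)std::malloc(5 * sizeof(int));
    if (p == NULL)
        return 1;

    for (int i = 0; i < 5; ++i)
        p[i] = i;

    // Grow the block to 10 ints; realloc may move it, so always use the returned pointer.
    int *q = (int *)std::realloc(p, 10 * sizeof(int));
    if (q == NULL)
    {
        std::free(p);       // the original block is still valid if realloc fails
        return 1;
    }
    p = q;

    std::free(p);           // release the block exactly once
    return 0;
}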
==================
What are the types of STL containers?

deque
hash_map
hash_multimap
hash_multiset
hash_set
list
map
multimap
multiset
set
vector
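
A minimal sketch showing two of these containers in use (note that the hash_* containers are pre-standard extensions; C++11 standardized them as the unordered_* containers):

#include <iostream>
#include <string>
#include <vector>
#include <map>

int main()
{
    std::vector<int> v;                   // dynamic array
    v.push_back(10);
    v.push_back(20);

    std::map<std::string, int> ages;      // sorted key/value pairs
    ages["alice"] = 30;
    ages["bob"] = 25;

    std::cout << "v holds " << v.size() << " elements\n";
    std::cout << "alice is " << ages["alice"] << "\n";
    return 0;
}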
============
Have you heard of "mutable" keyword?
The mutable keyword can only be applied to non-static and non-const data members of a class. If a data member is declared mutable, then it is legal to assign a value to this data member from a const member function.

SEE FOLLOWING CODE :-

********************************************

class Mutable
{
private :
int m_iNonMutVar;
mutable int m_iMutVar;

public:
Mutable();
void TryChange() const;

};

Mutable::Mutable():m_iNonMutVar(10),m_iMutVar(20) {};

void Mutable::TryChange() const
{
m_iNonMutVar = 100; // This will give an ERROR
m_iMutVar = 200; // This will WORK because it is mutable
}


When we create a const object, none of its data members can be changed. But if we want to change some data members of a const object, we can do so by declaring them with the mutable keyword.

ex:

#include <iostream>
using namespace std;

class sample
{
private:
mutable int i;

public:
sample(int x=0)
{
i=x;
}
void fun()const
{
i++;
cout<<i<<endl;
}
};

int main()
{
const sample s(15);
s.fun();
return 0;
}
=============
what is RTTI in c++?

Run-time type information (RTTI) is a mechanism that allows the type of an object to be determined during program execution. RTTI was added to the C++ language because many vendors of class libraries were implementing this functionality themselves. This caused incompatibilities between libraries. Thus it became obvious that support for run-time type information was needed at the language level.
There are three main C++ language elements to run-time type information:
1)The dynamic_cast operator. :- Used for conversion of polymorphic types.

class B { ... };
class C : public B { ... };
class D : public C { ... };

void f(D* pd)
{
C* pc = dynamic_cast<C*>(pd); // ok: C is a direct base class
// pc points to C subobject of pd

B* pb = dynamic_cast<B*>(pd); // ok: B is an indirect base class
// pb points to B subobject of pd
...
}
---------------------------------------------------------------------------
2)The typeid operator :- Used for identifying the exact type of an object.

3)The type_info class :- Used to hold the type information returned by the typeid operator. The type_info class describes type information generated within the program by the compiler.

NOTE: This example was compiled and linked using the MSVC++ 6.0 CL.EXE compiler and linker. Please check whether the same typeid facility is available on UNIX-based C++ compilers.
**************************************************************************************
#include <stdio.h>
#include <typeinfo>

class A
{
public:
};

class B:public A
{
public:
};

int main(int argc, char* argv[])
{
int iVal = int();
float fVal = float();
char cVal = char();
A a, a1;
B b;

const std::type_info& t_iVal = typeid(iVal); // Holds simple int data type info
const std::type_info& t_iValRef = typeid(&iVal); // Holds pointer type_info
printf("\n Type Info of iVal %s\n", t_iVal.name());
printf("\n Type Info of &iVal %s\n", t_iValRef.name());

printf("\n Type Info of fVal %s\n", typeid(fVal).name());
printf("\n Type Info of &fVal %s\n", typeid(&fVal).name());

printf("\n Type Info of cVal %s\n", typeid(cVal).name());
printf("\n Type Info of &cVal %s\n\n", typeid(&cVal).name());

printf("\n Type Info of a %s\n", typeid(a).name());
printf("\n Type Info of b %s\n\n", typeid(b).name());

if(typeid(a) == typeid(a1))
{
printf("\n BOTH INSTANCES a AND a1 BELONG TO THE SAME CLASS \n\n");
}

return 0;
}
===========
In C, when the host environment calls a program, the address of main() is recorded, and as soon as we return 0 this is an indication to the host that the program terminated normally. Returning any number other than zero signals to the host that the program terminated abnormally.

=============
What is a static function?
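A static member function belongs to the class rather than to any particular object: it has no this pointer, can be called without an instance, and may access only static members (a free function declared static in C has internal linkage instead). A minimal sketch; the class name Counter is illustrative:

#include <iostream>

class Counter
{
    static int count;            // one copy shared by all objects of the class
public:
    Counter() { ++count; }
    static int howMany()         // static member function: no 'this' pointer
    {
        return count;            // may only touch static members
    }
};

int Counter::count = 0;

int main()
{
    Counter a, b;
    std::cout << Counter::howMany() << "\n";   // called without an object; prints 2
    return 0;
}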
===============

Difference between a function returning a pointer and a pointer to a function
Just go through the two declarations written below to clarify this doubt.

Example of a pointer to a function:

int (*function_name)(Argument1, Argument2, ...)

The above declaration says that function_name is a pointer to a function whose return type is an integer.

Example of a function returning a pointer:

int *function_name(Argument1, Argument2, ...)

The above declaration says that the function returns a pointer to an integer.
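
A minimal sketch of both declarations in use (the function names add and largest are illustrative):

#include <cstdio>

int add(int a, int b) { return a + b; }     // an ordinary function

int *largest(int *a, int *b)                // a function returning a pointer to int
{
    return (*a > *b) ? a : b;
}

int main()
{
    int (*op)(int, int) = add;              // a pointer to a function
    std::printf("%d\n", op(2, 3));          // prints 5

    int x = 10, y = 20;
    std::printf("%d\n", *largest(&x, &y));  // prints 20
    return 0;
}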
==============
When a file is opened for update, we should not read after a write without an intervening call to fflush(), fseek(), or rewind().


Passing an unlimited number of arguments to a function can be done using the "variable number of arguments" facility in C.

Such function definitions contain an ellipsis (...), which indicates a variable number of arguments. Example:

#include <stdio.h>
#include <stdarg.h>

void eprintf (const char *fmt, ...)   /* 'template' is a C++ keyword, so the parameter is renamed */
{
va_list ap;
extern char *program_invocation_short_name;   /* GNU C library global */

fprintf (stderr, "%s: ", program_invocation_short_name);
va_start (ap, fmt);
vfprintf (stderr, fmt, ap);
va_end (ap);
}

Here va_start() initializes the variable argument list pointer ap to the beginning of the variable argument list, before any calls to va_arg().

The va_arg() macro returns the next argument in the variable argument list pointed to by ap.
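
A minimal, portable usage sketch (it avoids the GNU-specific program_invocation_short_name global; the function name report is illustrative):

#include <stdio.h>
#include <stdarg.h>

/* A simplified, portable variant of the eprintf() idea above. */
void report(const char *fmt, ...)
{
    va_list ap;
    va_start(ap, fmt);           /* point ap at the first variadic argument */
    vfprintf(stderr, fmt, ap);   /* forward the whole list to vfprintf */
    va_end(ap);
}

int main(void)
{
    report("error %d in %s\n", 42, "parser");
    return 0;
}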
=============
What is abstraction?
     Abstraction is the process of hiding unwanted details from the user.

Abstraction is the process of representing only the important features and hiding the background details and explanations from the user.
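
A minimal sketch of data abstraction (the names Stack and ArrayStack are illustrative):

#include <iostream>

// Users of Stack see only what a stack can do, not how it stores its data.
class Stack
{
public:
    virtual void push(int value) = 0;
    virtual int  pop() = 0;
    virtual ~Stack() {}
};

class ArrayStack : public Stack          // the hidden implementation detail
{
    int data[100];
    int top;
public:
    ArrayStack() : top(0) {}
    void push(int value) { data[top++] = value; }
    int  pop()           { return data[--top]; }
};

int main()
{
    ArrayStack s;
    Stack &abstract = s;                 // callers work through the abstract interface
    abstract.push(7);
    std::cout << abstract.pop() << "\n";
    return 0;
}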
============
Please tell me how I can attach an Oracle database with C++?
 Binding of data and functions together is the OOP concept of encapsulation. A class accomplishes this with the help of access specifiers; for example, the private members of a class can be seen only by the member functions of that class, and when you want to access any data member or member function, you have to go through the class object/name.
So the data and the functions that access that data are bound to each other.
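
A minimal sketch of that binding (the class name BankAccount is illustrative):

#include <iostream>

class BankAccount
{
private:
    double balance;                        // hidden data: only member functions may touch it
public:
    BankAccount() : balance(0.0) {}
    void deposit(double amount)            // the only sanctioned way to modify the data
    {
        if (amount > 0) balance += amount;
    }
    double getBalance() const { return balance; }
};

int main()
{
    BankAccount acct;
    acct.deposit(100.0);
    // acct.balance = 1000000;             // error: 'balance' is private
    std::cout << acct.getBalance() << "\n";
    return 0;
}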
===========
Differentiate Aggregation and containment?

Aggregation is the relationship between the whole and a part. We can add or remove some properties on the part (slave) side without affecting the whole.
The best example is a car, which contains the wheels and some extra parts. Even if some parts are missing, we can still call it a car.
But in the case of containment, the whole is affected when a part within it is affected. The human body is an apt example of this relationship: when the whole body dies, the parts (heart, etc.) die too.
===========
What are the different types of polymorphism?

There are two types of Polymorphism namely:
Compile Time Polymorphism
Run Time Polymorphism.
Compile-time polymorphism is achieved through function overloading and operator overloading.
Run-time polymorphism is achieved through virtual functions.
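
A minimal sketch of both kinds (the names area, Shape, and Circle are illustrative):

#include <iostream>

// Compile-time polymorphism: the same name with different parameter lists.
int area(int side)              { return side * side; }
int area(int width, int height) { return width * height; }

// Run-time polymorphism: the call is resolved through a virtual function.
class Shape
{
public:
    virtual void draw() const { std::cout << "Shape\n"; }
    virtual ~Shape() {}
};

class Circle : public Shape
{
public:
    void draw() const { std::cout << "Circle\n"; }
};

int main()
{
    std::cout << area(3) << " " << area(3, 4) << "\n";   // resolved at compile time

    Shape *s = new Circle;
    s->draw();                  // prints "Circle": resolved at run time via the vtable
    delete s;
    return 0;
}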

-===============
What are 2 ways of exporting a function from a DLL?

1. Taking a reference to the function from the DLL instance.
2. Using the DLL's type library.
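
A minimal Win32 sketch of two common ways to export a function from a DLL, __declspec(dllexport) or a module-definition (.DEF) file, plus run-time lookup on the consumer side (the names MyDll and addNumbers are illustrative):

// mydll.cpp : built as a DLL project (or with cl /LD) to produce MyDll.dll

// Way 1: export directly from the source with __declspec(dllexport).
extern "C" __declspec(dllexport) int addNumbers(int a, int b)
{
    return a + b;
}

// Way 2: omit the __declspec and list the function in a module-definition (.DEF) file:
//     LIBRARY MyDll
//     EXPORTS
//         addNumbers

// consumer.cpp : locating the exported function at run time
#include <windows.h>
#include <cstdio>

typedef int (*AddFn)(int, int);

int main()
{
    HMODULE h = LoadLibraryA("MyDll.dll");
    if (h != NULL)
    {
        AddFn add = (AddFn)GetProcAddress(h, "addNumbers");
        if (add != NULL)
            std::printf("%d\n", add(2, 3));
        FreeLibrary(h);
    }
    return 0;
}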
===========

How to Set a Memory Change Breakpoint
From the Debug Menu, choose New Breakpoint and click New Data Breakpoint
—or—
in the Breakpoints window Menu, click the New dropdown and choose New Data Breakpoint.
The New Breakpoint dialog box appears.
In the Address box, enter a memory address or expression that evaluates to a memory address. For example, &foo to break when the contents of variable foo change.
In the Byte Count box, enter the number of bytes you want the debugger to watch. For example, if you enter 4, the debugger will watch the four bytes starting at &foo and break if any of those bytes change value.
================
Difference between function overloading and operator overloading:

Function overloading is the practice of declaring the same function with different signatures. The same function name will be used with a different number of parameters or parameters of different types. However, functions cannot be overloaded on return type alone.

On the other hand operator overloading is:

Operator overloading allows you to apply the same operator to different variable types and produce different results. Operator overloading is commonplace among many efficient C++ programmers; it allows you to use the same function name, but as different functions.
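
A minimal sketch of each (the names print and Complex are illustrative):

#include <iostream>

// Function overloading: the same name with different parameter lists.
void print(int i)    { std::cout << "int: " << i << "\n"; }
void print(double d) { std::cout << "double: " << d << "\n"; }

// Operator overloading: giving an existing operator a meaning for a user-defined type.
struct Complex
{
    double re, im;
    Complex operator+(const Complex &other) const
    {
        Complex r = { re + other.re, im + other.im };
        return r;
    }
};

int main()
{
    print(5);
    print(2.5);

    Complex a = { 1, 2 }, b = { 3, 4 };
    Complex c = a + b;                   // calls Complex::operator+
    std::cout << c.re << "+" << c.im << "i\n";
    return 0;
}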
=============
10.) Is there any way to write a class so that no class can be
inherited from it?
One approach is a "static class" style, with all constructors private, so the class cannot be usefully derived from; in C++11 and later, marking the class final prevents inheritance directly.
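
A minimal sketch using the C++11 final specifier, one direct way to prevent inheritance (assuming a C++11-capable compiler):

class Sealed final            // 'final': no class may derive from Sealed
{
public:
    int value;
};

// class Derived : public Sealed { };   // error: a 'final' class cannot be a base class

int main()
{
    Sealed s;
    s.value = 1;
    return 0;
}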
==================












1. Why can't a constructor be virtual?
2. What is the difference between a destructor and a virtual destructor?
 





Given a Binary Search Tree, write a program to print the kth smallest element without using any static/global variable. You can’t pass the value k to any function also.
What are the 4 basics of OOP?
Define Data Abstraction. What is its importance?
Given an array of size n. It contains numbers in the range 1 to n. Each number is present at least once except for 2 numbers. Find the missing numbers.
Given an array of size n. It contains numbers in the range 1 to n. Find the numbers which aren’t present.
Given a string, find the first un-repeated character in it. Give some test cases.
You are given a dictionary of all valid words. You have the following 3 operations permitted on a word: delete a character, insert a character, replace a character. Now given two words - word1 and word2 - find the minimum number of steps required to convert word1 to word2. (one operation counts as 1 step.)
Given a cube of size n*n*n (i.e made up of n^3 smaller cubes), find the number of smaller cubes on the surface. Extend this to k-dimension.
What is a C array? Illustrate how it is different from a list.
What is the time and space complexities of merge sort and when is it preferred over quick sort?
Write a function which takes as parameters one regular expression(only ? and * are the special characters) and a string and returns whether the string matched the regular expression.
Given n red balls and m blue balls and some containers, how would you distribute those balls among the containers such that the probability of picking a red ball is maximized, assuming that the user randomly chooses a container and then randomly picks a ball from that.
Find the second largest element in an array with minimum no of comparisons and give the minimum no of comparisons needed on an array of size N to do the same.
Given an array of size n, containing every element from 1 to n+1, except one. Find the missing element.
How do you convert a decimal number to its hexadecimal equivalent? Give C code to do the same.
Explain polymorphism. Provide an example.
Given an array all of whose elements are positive numbers, find the maximum sum of a subsequence with the constraint that no 2 numbers in the sequence should be adjacent in the array. So 3 2 7 10 should return 13 (sum of 3 and 10) or 3 2 5 10 7 should return 15 (sum of 3, 5 and 7)
You are given some denominations of coins in an array (int denom[])and infinite supply of all of them. Given an amount (int amount), find the minimum number of coins required to get the exact amount. What is the method called?
Given an array of size n. It contains numbers in the range 1 to n. Each number is present at least once except for 1 number. Find the missing number.


C++ interview questions and answers
By admin | December 8, 2007
What is the most efficient way to reverse a linklist?
How to sort & search a single linklist?
Which is more convenient - single or double-linked linklist? Discuss the trade-offs? What about XOR-linked linklist?
How does indexing work?
char s[10];
s="Hello";
printf(s);
What will be the output? Is there any error with this code?
What is the difference between
char s[]="Hello";
char *s="Hello";
Please give a clear idea on this?
Why do we pass a reference for copy constructors? If it does shallow copy for pass by value (user defined object), how will it do the deep copy?
What is the difference between shallow copy & deep copy?
What is the difference between strcpy and memcpy? What rule should we follow when choosing between these two?
If we declare two variable and two applications are using the same variable, then what will its value be, will it be the same?

Some C++ interview questions
By admin | October 2, 2007
What is a void return type?
How is it possible for two String objects with identical values not to be equal under the == operator?
What is the difference between a while statement and a do statement?
Can a for statement loop indefinitely?
How do you link a C++ program to C functions?
How can you tell what shell you are running on UNIX system?
How do you find out if a linked-list has an end? (i.e. the list is not a cycle)
How do you write a function that can reverse a linked-list?
Can a copy constructor accept an object of the same class as parameter, instead of reference of the object?
What is a local class?
What is a nested class?
What are the access privileges in C++? What is the default access level?
What is multiple inheritance(virtual inheritance)? What are its advantages and disadvantages?
How do you access the static member of a class?
What does extern int func(int *, Foo) accomplish?

Explain the flow of SDI application?
Answer
# 1 CWinApp -> CDocument -> CFrameWnd -> CView
  

   Re: What is the base class for MFC Framework ?
Answer
# 1 CObject class
  

Re: What is a modal and a modeless dialog box? Give some examples.
Answer
# 1 A modal dialog is one that will not allow you to access anything else in the application while the dialog is active.

Call:
CDialog::DoModal()

The reverse of this is a modeless dialog; in MFC it is created with Create() and shown with ShowWindow().

For example:
Modal dialog:
When we access menu items such as Save As, Open, or Attach File in an application, we cannot access any other part of the application except the active dialog.

When we open Add/Remove Programs to uninstall an application, we get an uninstallation dialog which is modeless, because we are still able to access Add/Remove Programs. (This is probably the case in Vista; in XP it is a modal dialog that they have used.)
 




Re: what is the use of CWinApp class?
Answer
# 1 CWinApp is an application class that provides member functions
for initializing your application (and each instance of it)
and for running the application.

Each application that uses the Microsoft Foundation classes
can only contain one object derived from CWinApp. This
object is constructed when other C++ global objects are
constructed and is already available when Windows calls the
WinMain function, which is supplied by the Microsoft
Foundation Class Library. Declare your derived CWinApp
object at the global level.

When you derive an application class from CWinApp, override
the InitInstance member function to create your
application's main window object.

In addition to the CWinApp member functions, the Microsoft
Foundation Class Library provides the following global
functions to access your CWinApp object and other global
information:

•AfxGetApp   Obtains a pointer to the CWinApp object.

•AfxGetInstanceHandle   Obtains a handle to the current
application instance.

•AfxGetResourceHandle   Obtains a handle to the
application's resources.

•AfxGetAppName   Obtains a pointer to a string containing
the application's name. Alternately, if you have a pointer
to the CWinApp object, use m_pszExeName to get the
application's name.
  


  Re: What is stack size in win32 program?
Answer
# 1 1 MB by default



Re: If application hangs while SendMessage is waiting for the result, how you handle it?
Answer
# 1 Instead of the SendMessage API, I will use the SendMessageTimeout
API to avoid the hang



   Re: If application hangs while SendMessage is waiting for the result, how you handle it?
Answer
# 2 You can use PostMessage API instead.



Re: How can update edit control data of an executing application from other application?
Answer
# 1 There is a function, CWnd::UpdateData(), which transfers the control data.
The argument passed into it is of bool type: if it is TRUE, the data members are updated from the dialog controls, and if it is FALSE, the dialog controls are updated from the data members.
  


Re: How you find memory leaks?
Answer
# 1 There are many ways to find memory leaks. One of the ways is by
using an MFC class, and another way is by using tools such as Purify...

CMemoryState is an MFC class by which we can find memory
leaks. Below is sample code to do the same.

#ifdef _DEBUG
    CMemoryState oldState, newState, diffState;
    oldState.Checkpoint();
#endif
    int* a = new int[10];
#ifdef _DEBUG   
    newState.Checkpoint();
    if(diffState.Difference(oldState, newState))
    {
        TRACE0("Memory Leaked");
    }
#endif
 

  
   Re: How you find memory leaks?
Answer
# 2 There is a pretty easy way to identify whether your application has any memory leak: by using the macro called DEBUG_NEW.

Define this macro in each of your .cpp files,
like: #define new DEBUG_NEW

Then if you run your application once in debug mode, it will report each allocation whose memory has not been released properly.
  


What is primitive and non-primitive application?



How to detect the mouse entering an image? While entering,
we need to display the next image...

 
   Re: How to detect the mouse entering an image? While entering, we need to display the next image...
Answer
# 1 By using the tool tip property we can know where the
mouse pointer is located now.



Explain in short: what is CTargetObject?



Multithreading Tutorial
By John Kopplin | 28 Dec 2006
This article demonstrates how to write a multithreaded Windows program in C++ using only the Win32 API.
Download source and demo projects - 425 KB
Background
When you run two programs on an Operating System that offers memory protection, as Windows and UNIX/Linux do, the two programs are executed as separate processes, which means they are given separate address spaces. This means that when program #1 modifies the address 0x800A 1234 in its memory space, program #2 does not see any change in the contents of its memory at address 0x800A 1234. With simpler Operating Systems that cannot accomplish this separation of processes, a faulty program can bring down not only itself but other programs running on that computer (including the Operating System itself).
The ability to execute more than one process at a time is known as multi-processing. A process consists of a program (usually called the application) whose statements are performed in an independent memory area. There is a program counter that remembers which statement should be executed next, and there is a stack which holds the arguments passed to functions as well as the variables local to functions, and there is a heap which holds the remaining memory requirements of the program. The heap is used for the memory allocations that must persist longer than the lifetime of a single function. In the C language, you use malloc to acquire memory from the heap, and in C++, you use the new keyword.
Sometimes, it is useful to arrange for two or more processes to work together to accomplish one goal. One situation where this is beneficial is where the computer's hardware offers multiple processors. In the old days this meant two sockets on the motherboard, each populated with an expensive Xeon chip. Thanks to advances in VLSI integration, these two processor chips can now fit in a single package. Examples are Intel's "Core Duo" and AMD's "Athlon 64 X2". If you want to keep two microprocessors busy working on a single goal, you basically have two choices:
design your program to use multiple processes (which usually means multiple programs), or
design your program to use multiple threads.
So, what's a thread? A thread is another mechanism for splitting the workload into separate execution streams. A thread is lighter weight than a process. This means, it offers less flexibility than a full blown process, but can be initiated faster because there is less for the Operating System to set up. What's missing? The separate address space is what is missing. When a program consists of two or more threads, all the threads share a single memory space. If one thread modifies the contents of the address 0x800A 1234, then all the other threads immediately see a change in the contents of their address 0x800A 1234. Furthermore, all the threads share a single heap. If one thread allocates (via malloc or new) all of the memory available in the heap, then attempts at additional allocations by the other threads will fail.
But each thread is given its own stack. This means, thread #1 can be calling FunctionWhichComputesALot() at the same time that thread #2 is calling FunctionWhichDrawsOnTheScreen(). Both of these functions were written in the same program. There is only one program. But, there are independent threads of execution running through that program.
What's the advantage? Well, if your computer's hardware offers two processors, then two threads can run simultaneously. And even on a uni-processor, multi-threading can offer an advantage. Most programs can't perform very many statements before they need to access the hard disk. This is a very slow operation, and hence the Operating System puts the program to sleep during the wait. In fact, the Operating System assigns the computer's hardware resources to somebody else's program during the wait. But, if you have written a multi-threaded program, then when one of your threads stalls, your other threads can continue.
The Jaeschke Magazine Articles
One good way to learn any new programming concept is to study other people's code. You can find source code in magazine articles, and posted on the Internet at sites such as CodeProject. I came across some good examples of multi-threaded programs in two articles written for the C/C++ Users Journal, by Rex Jaeschke. In the October 2005 issue, Jaeschke wrote an article entitled "C++/CLI Threading: Part 1", and in the November 2005 issue, he wrote his follow-up article entitled "C++/CLI Threading: Part 2". Unfortunately, the C/C++ Users Journal magazine folded shortly after these articles appeared. But, the original articles and Jaeschke's source code are still available at the following websites:
Part 1
Part 2
You'll notice that the content from the defunct C/C++ Users Journal has been integrated into the Dr. Dobb's Portal website, which is associated with Dr. Dobb's Journal, another excellent programming magazine.
You might not be familiar with the notation C++/CLI. This stands for "C++ Common Language Infrastructure" and is a Microsoft invention. You're probably familiar with Java and C#, which are two languages that offer managed code where the Operating System rather than the programmer is responsible for deallocating all memory allocations made from the heap. C++/CLI is Microsoft's proposal to add managed code to the C++ language.
I am not a fan of this approach, so I wasn't very interested in Jaeschke's original source code. I am sure Java and C# are going to hang around, but C++/CLI attempts to add so many new notations (and concepts) on top of C++, which is already a very complicated language, that I think this language will disappear.
But, I still read the original C/C++ Users Journal article and thought Jaeschke had selected good examples of multi-threading. I especially liked how his example programs were short and yet displayed data corruption when run without the synchronization methods that are required for successful communication between threads. So, I sat down and rewrote his programs in standard C++. This is what I am sharing with you now. The source code I present could also be written in standard C. In fact, that's easier than accomplishing it in C++ for a reason we will get to in just a minute.
This is probably the right time to read Jaeschke's original articles, since I don't plan to repeat his great explanations of multitasking, reentrancy, atomicity, etc. For example, I don't plan to explain how a program is given its first thread automatically and all additional threads must be created by explicit actions by the program (oops). The URLs where you can find Jaeschke's two articles are given above.
Creating Threads Under Windows
It is unfortunate that the C++ language didn't standardize the method for creating threads. Therefore, various compiler vendors invented their own solutions. If you are writing a program to run under Windows, then you will want to use the Win32 API to create your threads. This is what I will demonstrate. The Win32 API offers the following function to create a new thread:
uintptr_t _beginthread(
   void( __cdecl *start_address )( void * ),
   unsigned stack_size,
   void *arglist
);
This function signature might look intimidating, but using it is easy. The _beginthread() function takes three passed parameters. The first is the name of the function which you want the new thread to begin executing. This is called the thread's entry-point-function. You get to write this function, and the only requirements are that it take a single passed parameter (of type void*) and that it returns nothing. That is what is meant by the function signature:
void( __cdecl *start_address )( void * ),
The second passed parameter to the _beginthread() function is a requested stack size for the new thread (remember, each thread gets its own stack). However, I always set this parameter to 0, which forces the Windows Operating System to select the stack size for me, and I haven't had any problems with this approach. The final passed parameter to the _beginthread() function is the single parameter you want passed to the entry-point-function. This will be made clear by the following example program:
#include <stdio.h>
#include <windows.h>
#include <process.h>    // needed for _beginthread()

void  silly( void * );   // function prototype

int main()
{
    // Our program's first thread starts in the main() function.

    printf( "Now in the main() function.\n" );

    // Let's now create our second thread and ask it to start
    // in the silly() function.


    _beginthread( silly, 0, (void*)12 );

    // From here on there are two separate threads executing
    // our one program.

    // This main thread can call the silly() function if it wants to.

    silly( (void*)-5 );
    Sleep( 100 );
}

void  silly( void *arg )
{
    printf( "The silly() function was passed %d\n", (INT_PTR)arg ) ;
}
Go ahead and compile this program. Simply request a Win32 Console Program from Visual C++ .NET 2003's New Project Wizard and then "Add a New item" which is a C++ source file (.CPP file) in which you place the statements I have shown. I am providing Visual C++ .NET 2003 workspaces for Jaeschke's (modified) programs, but you need to know the key to starting a multi-threaded program from scratch: you must remember to perform one modification to the default project properties that the New Project Wizard gives you. Namely, you must open up the Project Properties dialog (select "Project" from the main Visual C++ menu and then select "Properties"). In the left hand column of this dialog, you will see a tree view control named "Configuration Properties", with the main sub-nodes labeled "C/C++", "Linker", etc. Double-click on the "C/C++" node to open this entry up. Then, click on "Code Generation". In the right hand area of the Project Properties dialog, you will now see listed "Runtime Library". This defaults to "Single Threaded Debug (/MLd)". [The notation /MLd indicates that this choice can be accomplished from the compiler command line using the /MLd switch.] You need to click on this entry to observe a drop-down list control, where you must select Multi-threaded Debug (/MTd). If you forget to do this, your program won't compile, and the error message will complain about the _beginthread() identifier.
A very interesting thing happens if you comment out the call to the Sleep() function seen in this example program. Without the Sleep() statement, the program's output will probably only show a single call to the silly() function, with the passed argument -5. This is because the program's process terminates as soon as the main thread reaches the end of the main() function, and this may occur before the Operating System has had the opportunity to create the other thread for this process. This is one of the discrepancies from what Jaeschke says concerning C++/CLI. Evidently, in C++/CLI, each thread has an independent lifetime, and the overall process (which is the container for all the threads) persists until the last thread has decided to die. Not so for straight C++ Win32 programs: the process dies when the primary thread (the one that started in the main function) dies. The death of this thread means the death of all the other threads.
Using a C++ Member Function as the Thread's Entry-Point-Function
The example program I just listed really isn't a C++ program because it doesn't use any classes. It is just a C language program. The Win32 API was really designed for the C language, and when you employ it with C++ programs, you sometimes run into difficulties. Such as this difficulty: "How can I employ a class member function (a.k.a. an instance function) as the thread's entry-point-function?"
If you are rusty on your C++, let me remind you of the problem. Every C++ member function has a hidden first passed parameter known as the this parameter. Via the this parameter, the function knows which instance of the class to operate upon. Because you never see these this parameters, it is easy to forget they exist.
Now, let's again consider the _beginthread() function which allows us to specify an arbitrary entry-point-function for our new thread. This entry-point-function must accept a single void* passed param. Aye, there's the rub. The function signature required by _beginthread() does not allow the hidden this parameter, and hence a C++ member function cannot be directly activated by _beginthread().
We would be in a bind were it not for the fact that C and C++ are incredibly expressive languages (famously allowing you the freedom to shoot yourself in the foot) and the additional fact that _beginthread() does allow us to specify an arbitrary passed parameter to the entry-point-function. So, we use a two-step procedure to accomplish our goal: we ask _beginthread() to employ a static class member function (which, unlike an instance function, lacks the hidden this parameter), and we send this static class function the hidden this pointer as a void*. The static class function knows to convert the void* parameter to a pointer of a class instance. Voila! We now know which instance of the class should call the real entry-point-function, and this call completes the two step process. The relevant code (from Jaeschke's modified Part 1 Listing 1 program) is shown below:
class ThreadX
{
public:

  // In C++ you must employ a free (C) function or a static
  // class member function as the thread entry-point-function.

  static unsigned __stdcall ThreadStaticEntryPoint(void * pThis)
  {
      ThreadX * pthX = (ThreadX*)pThis;   // the tricky cast

      pthX->ThreadEntryPoint();    // now call the true entry-point-function

      // A thread terminates automatically if it completes execution,
      // or it can terminate itself with a call to _endthread().

      return 1;          // the thread exit code
  }

  void ThreadEntryPoint()
  {
     // This is the desired entry-point-function but to get
     // here we have to use a 2 step procedure involving
     // the ThreadStaticEntryPoint() function.

  }
};
Then, in the main() function, we get the two step process started as shown below:
hth1 = (HANDLE)_beginthreadex( NULL, // security
                      0,             // stack size
                      ThreadX::ThreadStaticEntryPoint,// entry-point-function
                      o1,           // arg list holding the "this" pointer
                      CREATE_SUSPENDED, // so we can later call ResumeThread()
                      &uiThread1ID );
Notice that I am using _beginthreadex() rather than _beginthread() to create my thread. The "ex" stands for "extended", which means this version offers additional capability not available with _beginthread(). This is typical of Microsoft's Win32 API: when shortcomings were identified, more powerful augmented techniques were introduced. One of these new extended capabilities is that the _beginthreadex() function allows me to create but not actually start my thread. I elect this choice merely so that my program better matches Jaeschke's C++/CLI code. Furthermore, _beginthreadex() allows the entry-point-function to return an unsigned value, and this is handy for reporting status back to the thread creator. The thread's creator can access this status by calling GetExitCodeThread(). This is all demonstrated in the "Part 1 Listing 1" program I provide (the name comes from Jaeschke's magazine article).
At the end of the main() function, you will see some statements which have no counterpart in Jaeschke's original program. This is because in C++/CLI, the process continues until the last thread exits. That is, the threads have independent lifetimes. Hence, Jaeschke's original code was designed to show that the primary thread could exit and not influence the other threads. However, in C++, the process terminates when the primary thread exits, and when the process terminates, all its threads are then terminated. We force the primary thread (the thread that starts in the main() function) to wait upon the other two threads, via the following statements:
    WaitForSingleObject( hth1, INFINITE );
    WaitForSingleObject( hth2, INFINITE );
If you comment out these waits, the non-primary threads will never get a chance to run because the process will die when the primary thread reaches the end of the main() function.
Synchronization Between Threads
In the Part 1 Listing 1 program, the multiple threads don't interact with one another, and hence they cannot corrupt each other's data. The point of the Part 1 Listing 2 program is to demonstrate how this corruption comes about. This type of corruption is very difficult to debug, and this makes multi-threaded programs very time consuming if you don't design them correctly. The key is to provide synchronization whenever shared data is accessed (either written or read).
A synchronization object is an object whose handle can be specified in one of the Win32 wait functions such as WaitForSingleObject(). The synchronization objects provided by Win32 are:
event
mutex or critical section
semaphore
waitable timer
An event notifies one or more waiting threads that an event has occurred.
A mutex can be owned by only one thread at a time, enabling threads to coordinate mutually exclusive access to a shared resource. The state of a mutex object is set to signaled when it is not owned by any thread, and to nonsignaled when it is owned by a thread. Only one thread at a time can own a mutex object, whose name comes from the fact that it is useful in coordinating mutually exclusive access to a shared resource.
Critical section objects provide synchronization similar to that provided by mutex objects, except that critical section objects can be used only by the threads of a single process (hence they are lighter weight than a mutex). Like a mutex object, a critical section object can be owned by only one thread at a time, which makes it useful for protecting a shared resource from simultaneous access. There is no guarantee about the order in which threads will obtain ownership of the critical section; however, the Operating System will be fair to all threads. Another difference between a mutex and a critical section is that if the critical section object is currently owned by another thread, EnterCriticalSection() waits indefinitely for ownership whereas WaitForSingleObject(), which is used with a mutex, allows you to specify a timeout.
A semaphore maintains a count between zero and some maximum value, limiting the number of threads that are simultaneously accessing a shared resource.
A waitable timer notifies one or more waiting threads that a specified time has arrived.
This Part 1 Listing 2 program demonstrates the Critical Section synchronization object. Take a look at the source code now. Note that in the main() function, we create two threads and ask them both to employ the same entry-point-function, namely the function called StartUp(). However, because the two object instances (o1 and o2) have different values for the mover class data member, the two threads act completely different from each other. Because in one case isMover = true and in the other case isMover = false, one of the threads continually changes the Point object's x and y values while the other thread merely displays these values. But, this is enough interaction that the program will display a bug if used without synchronization.
Compile and run the program as I provide it to see the problem. Occasionally, the print out of x and y values will show a discrepancy between the x and y values. When this happens, the x value will be 1 larger than the y value. This happens because the thread that updates x and y was interrupted by the thread that displays the values between the moments when the x value was incremented and when the y value was incremented.
Now, go to the top of the Main.cpp file and find the following statement:
//#define WITH_SYNCHRONIZATION
Uncomment this statement (that is, remove the double slashes). Then, re-compile and re-run the program. It now works perfectly. This one change activates all of the critical section statements in the program. I could have just as well used a mutex or a semaphore, but the critical section is the most light-weight (hence fastest) synchronization object offered by Windows.
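For reference, the Win32 critical-section calls that implement this kind of guard look roughly like the sketch below; the variable names cs, sharedX, and sharedY are illustrative and not taken from the Listing 2 source.

#include <windows.h>

CRITICAL_SECTION cs;            // shared by all threads in this process
int sharedX = 0, sharedY = 0;   // the data we want to keep consistent

void MoveOnce()
{
    EnterCriticalSection(&cs);  // blocks until no other thread owns cs
    ++sharedX;
    ++sharedY;                  // both updates now happen as one unit
    LeaveCriticalSection(&cs);
}

int main()
{
    InitializeCriticalSection(&cs);   // must be called before the first Enter
    MoveOnce();                       // ... normally called from several threads ...
    DeleteCriticalSection(&cs);       // clean up when all threads are done
    return 0;
}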
The Producer/Consumer Paradigm
One of the most common uses for a multi-threaded architecture is the familiar producer/consumer situation where there is one activity to create packets of stuff and another activity to receive and process those packets. The next example program comes from Jaeschke's Part 2 Listing 1 program. An instance of the CreateMessages class acts as the producer, and an instance of the ProcessMessages class acts as the consumer. The producer creates exactly five messages and then commits suicide. The consumer is designed to live indefinitely, until commanded to die. The primary thread waits for the producer thread to die, and then commands the consumer thread to die.
The program has a single instance of the MessageBuffer class, and this one instance is shared by both the producer and the consumer threads. Via synchronization statements, this program guarantees that the consumer thread can't process the contents of the message buffer until the producer thread has put something there, and that the producer thread can't put another message there until the previous one has been consumed.
Since my Part 1 Listing 2 program demonstrates a critical section, I elected to employ a mutex in this Part 2 Listing 1 program. As with the Part 1 Listing 2 example program, if you simply compile and run the Part 2 Listing 1 program as I provide it, you will see that it has a bug. Whereas the producer creates the five following messages:
1111111111
2222222222
3333333333
4444444444
5555555555
the consumer receives the five following messages:
1
2111111111
3222222222
4333333333
5444444444
There is clearly a synchronization problem: the consumer is getting access to the message buffer as soon as the producer has updated the first character of the new message. But the rest of the message buffer has not yet been updated.
Now, go to the top of the Main.cpp file and find the following statement:
//#define WITH_SYNCHRONIZATION
Uncomment this statement (that is, remove the double slashes). Then, re-compile and re-run the program. It now works perfectly.
Between the English explanation in Jaeschke's original magazine article and all the comments I have put in my C++ source code, you should be able to follow the flow. The final comment I will make is that the GetExitCodeThread() function returns the special value 259 when the thread is still alive (and hence hasn't really exited). You can find the definition for this value in the WinBase header file:
#define STILL_ACTIVE   STATUS_PENDING
where you can find STATUS_PENDING defined in the WinNT.h header file:
#define STATUS_PENDING    ((DWORD   )0x00000103L)
Note that 0x00000103 = 259.
Thread Local Storage
Jaeschke's Part 2 Listing 3 program demonstrates thread local storage. Thread local storage is memory that is accessible only to a single thread. At the start of this article, I said that an Operating System could initiate a new thread faster than it could initiate a new process because all threads share the same memory space (including the heap) and hence there is less that the Operating System needs to set up when creating a new thread. But, here is the exception to that rule. When you request thread local storage, you are asking the Operating System to erect a wall around certain memory locations in order that only a single one of the threads may access that memory.
The C++ keyword which declares that a variable should employ thread local storage is __declspec(thread).
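A minimal sketch of __declspec(thread) in use; the variable name perThreadCounter and the worker function are illustrative, not taken from Jaeschke's Listing 3.

#include <stdio.h>
#include <process.h>
#include <windows.h>

// Each thread that touches this variable gets its own private copy.
__declspec(thread) int perThreadCounter = 0;

void worker(void *)
{
    ++perThreadCounter;                             // increments this thread's copy only
    printf("worker sees %d\n", perThreadCounter);   // always prints 1
}

int main()
{
    _beginthread(worker, 0, NULL);
    ++perThreadCounter;
    printf("main sees %d\n", perThreadCounter);     // prints 1, unaffected by the worker
    Sleep(100);                                     // crude wait so the worker can finish
    return 0;
}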
As with my other example programs, this one will display an obvious synchronization problem if you compile and run it unchanged. After you have seen the problem, go to the top of the Main.cpp file and find the following statement:
//#define WITH_SYNCHRONIZATION
Uncomment this statement (that is, remove the double slashes). Then, re-compile and re-run the program. It now works perfectly.
Atomicity
Jaeschke's Part 2 Listing 4 program demonstrates the problem of atomicity, which is the situation where an operation will fail if it is interrupted mid-way through. This usage of the word "atomic" relates back to the time when an atom was believed to be the smallest particle of matter and hence something that couldn't be further split. Assembly language statements are naturally atomic: they cannot be interrupted half-way through. This is not true of high-level C or C++ statements. Whereas you might consider an update to a 64 bit variable to be an atomic operation, it actually isn't on 32 bit hardware. Microsoft's Win32 API offers the InterlockedIncrement() function as the solution for this type of atomicity problem.
This example program could be rewritten to employ 64 bit integers (the LONGLONG data type) and the InterlockedIncrement64() function if it only needed to run under a Windows 2003 Server. But, alas, Windows XP does not support InterlockedIncrement64(). Hence, I was originally worried that I wouldn't be able to demonstrate an atomicity bug in a Windows XP program that dealt only with 32 bit integers. But, curiously, such a bug can be demonstrated as long as we employ the Debug mode settings in the Visual C++ .NET 2003 compiler rather than the Release mode settings. Therefore, you will notice that unlike the other example programs inside the .ZIP file that I distribute, this one is set for a Debug configuration.
As with my other example programs, this one will display an obvious synchronization problem if you compile and run it unchanged. After you have seen the problem, go to the top of the Main.cpp file and find the following statement:
static bool interlocked = false;    // change this to fix the problem
Change false to true, and then re-compile and re-run the program. It now works perfectly because it is now employing InterlockedIncrement().
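For reference, the difference between the plain and interlocked increments looks roughly like the sketch below; the variable name counter is illustrative, not taken from the Listing 4 source.

#include <windows.h>

volatile LONG counter = 0;      // shared by several threads

void BumpCounter(bool interlocked)
{
    if (interlocked)
        InterlockedIncrement(&counter);   // the whole read-modify-write is atomic
    else
        ++counter;                        // load, add, store: can be interrupted mid-way
}

int main()
{
    BumpCounter(true);
    return 0;
}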
The Example Programs
In order that other C++ programmers can experiment with these multithreaded examples, I make available a .ZIP file holding five Visual C++ .NET 2003 workspaces for the Part 1 Listing 1, Part 1 Listing 2, Part 2 Listing 1, Part 2 Listing 3, and Part 2 Listing 4 programs from Jaeschke's original article (now translated to C++). Enjoy!
Conclusion
==========================================================================================================
Helps programmers new to Winsock start programming TCP sockets in C++
Download demo project - 80.6 KB
Introduction
There really is not a lot of material on this subject (I believe) that explains the use of Windows sockets sufficiently enough for a beginner to understand and begin to program them. I still remember the hassle that I went through trying to find a proper tutorial that didn't leave me hanging with many questions after I started programming with them myself.
That was a long time ago now, and it was quite a challenge for me to program my first application that could communicate with other computers over the Internet, even though my first introduction to sockets was through Visual Basic; a high-level and very user-friendly programming language. Now that I have long since switched to the more powerful C++, I rapidly found that the labor I had expended to code sockets in VB was nothing compared to what awaited!
Thankfully, after many hours searching various web pages on the Internet, I was able to collect all the bits and pieces, and finally compile my first telnet program in C++. My goal is to collect all the necessary data in one place; right here, so the reader doesn't have to recollect all the data over the Internet. Thus, I present this tutorial in hopes that it alone will be sufficient information to begin programming.
Before we begin, you will need to include winsock.h and link libws2_32.a to your project in order to use the APIs that are necessary for TCP/IP. If this is not possible, use LoadLibrary() to load ws2_32.dll at runtime, or some similar method.
All the code in this article was written and tested using "Bloodshed Dev-C++ 4.9.8.0"; but generally, it should work with any compiler with minimal modifications.
What the Heck are Threads, Ports, and Sockets?
Actually, we can use the word-picture presented to us by the name "socket" in a similar fashion to illustrate what they are and how they work. In an actual mechanical socket, you may recall that it is the female, or "receiving" end of a connection. A "thread" is a symbolic name for a connection between your computer and a remote computer, and a thread is connected to a socket.
In case I've lost you with all that proper terminology, you might think of a thread as an actual, physical, sewing-type thread stretched from one computer to the other, as the common analogy goes. In order for the threads to be attached to each computer, however, there must be a receiving object that attaches to the threads, and these are called sockets.
A socket can be opened on any "port"; which is simply a unique number to distinguish it from other threads, because more than just one connection can be made on the same computer. A few of these ports have been set aside to serve a specific purpose. Beyond these ports, there are quite a large number of other ports that can be used for anything and everything: over 6,000, actually. A few commonly used ports are listed below with their corresponding services:

Port    Service
7       Ping
13      Time
15      Netstat
22      SSH
23      Telnet (default)
25      SMTP (Send mail)
43      Whois (Query information)
79      Finger (Query server information)
80      HTTP (Web pages)
110     POP (Receive mail)
119     NNTP
513     CLOGIN (Used for IP spoofing)
There are many more ports used for specific purposes that are not shown here. Typically though, if you wish to use a port that has no specific assigned service, any port from 1,024 to 65,535 should be just fine. Of course, if instead you want to listen in on messages sent to and from service ports, you can do that too.
Are you connected to the Internet now? Let's say you are, and you have Internet Explorer or some other web page service running, as well as AOL or some other chat program. On top of that (as if the connection wasn't slow enough already) you're trying to send and receive email. What ports do you think are opened, sending and receiving data?
Internet Explorer (etc.) sends and receives data via port 80
AOL and other instant messaging programs usually like to hang out in the higher unassigned ports up in the thousands to be safe from interference. Each chat program varies, as there is no specific "chat" service and multiple messaging programs may run at the same time
When you're sending your email, you and the remote mail server are communicating using port 25
And, when you receive email, your mail client (such as Microsoft Outlook) uses port 110 to retrieve your mail from the mail server
And onward extends the list.
It's not enough just to know what port number we're using, obviously; we need to know what remote computer/server we're connecting to. Just like we find out the home address of the people we visit before we get in the car, we have to know the "IP address" of the host we are connecting to, if we are connecting and not just listening (a chat program needs to be able to do both).
An IP address is an identification number that is assigned to each computer on the network, and consists of four sets of digits separated by periods. You can view your IP address by running ipconfig.exe at the MSDOS prompt.
For the examples shown throughout this tutorial, we will be using what is called the "loop-back address" to test our chat program without being connected to the Internet. This address is 127.0.0.1. Whenever you try to make a connection to this IP, the computer loops the request back to your computer and attempts to locate a server on the specified port. That way, you can have the server and client running on the same computer. Once you decide to connect to other remote computers, and you've worked the bugs out of your chat program, you will need to get the unique IP address of each to communicate with them over the Internet.
Because we as humans are very capable of forgetting things, and because we couldn't possibly hope to remember a bunch of numbers for every web site we visit, some smart individuals came up with the wonderful idea of "domain names". Now, we have neat little names like www.yahoo.com and www.cia.gov that stand for IP addresses that are much easier to remember than clunky sets of digits. When you type one of these names in your browser window, the IP address for that domain name is looked up via a "router", and once it is obtained (or the host is "resolved"), the browser can contact the server residing at that address.
For example, let's say I call an operator because I can't remember my girlfriend's phone number (fat chance). So, I just tell the operator what her name is (and a few other details, but that's not important) and she happily gives me the digits. That's kind of what happens when a request is made for an IP address of any domain name.
We have two API calls that accomplish this task. It's a good idea to check whether whoever uses your program has typed a domain name instead of an IP address, so your program can look up the correct IP address before continuing. Most people won't want to remember IP addresses anyway, so most likely you'll need to translate domain names into IP addresses before you can establish a connection, which requires that the computer is connected to the Internet. Then, once you have the address, you're all set to connect.
//Return the IP address of a domain name

DECLARE_STDCALL_P(struct hostent *) gethostbyname(const char*);

//Convert a string address (i.e., "127.0.0.1") to an IP address. Note that 
//this function returns the address into the correct byte order for us so
//that we do not need to do any conversions (see next section)

unsigned long PASCAL inet_addr(const char*);
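For instance, a small helper along these lines (just a sketch; it assumes Winsock has already been started, and the host-name string is simply whatever your user typed) will accept either a dotted IP or a domain name:

#include <winsock2.h>
#include <string.h>

//Return a usable IPv4 address (network byte order) for either a dotted
//IP string or a domain name. Assumes WSAStartup() was already called.
unsigned long ResolveHost(const char* pszHost)
{
    //First see whether the string is already a dotted IP such as "127.0.0.1"
    unsigned long addr = inet_addr(pszHost);
    if (addr != INADDR_NONE)
        return addr; //Already in network byte order

    //Otherwise ask DNS to resolve the domain name
    hostent* phe = gethostbyname(pszHost);
    if (phe == NULL)
        return INADDR_NONE; //Lookup failed

    //Copy the first address from the list (also network byte order)
    memcpy(&addr, phe->h_addr_list[0], sizeof(addr));
    return addr;
}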
Byte Order
Just when you thought all this thread-socket stuff was going to be simple and easy, we have to start discussing byte order. This is because Intel computers and network protocols use reversed byte ordering from each other, and we have to convert each port and IP address to network byte order before we send it; otherwise we'll have a big mix-up. Port 25, when not reversed, will not end up being port 25 at all. So, we have to make sure we're speaking the same language as the server when we attempt to communicate with it.
Thankfully, we don't have to code all the conversion functions manually; as Microsoft kindly provides us with a few API to do this as well. The four functions that are used to change the byte order of an IP or port number are as follows:
u_long PASCAL htonl(u_long); //Host to network long
u_long PASCAL ntohl(u_long); //Network to host long

u_short PASCAL htons(u_short); //Host to network short
u_short PASCAL ntohs(u_short); //Network to host short
Remember! The "host" computer is the computer that listens for and invites connections to it, and the "network" computer is the visitor that connects to the host.
So, for example, before we specify which port we are going to listen on or connect to, we'll have to use the htons() function to convert the number to network byte order. Note that after using inet_addr() to convert a string IP address to the required form, we are returned the address already in the correct network order, eliminating the need to invoke htonl(). An easy way to differentiate between htons() and htonl() is to think of the port number as the shorter number and the IP as the longer number (which is true: an IP address consists of four sets of up to three digits separated by periods, versus a single port number).
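As a quick sketch (assuming winsock2.h has been included), converting the port while leaving the inet_addr() result alone looks like this:

//The port must be converted with htons() before it goes into a sockaddr_in,
//while inet_addr() already hands back the IP in network byte order
u_short portHost = 25;              //Host byte order (our Intel machine)
u_short portNet  = htons(portHost); //Network byte order, ready for use

unsigned long ip = inet_addr("127.0.0.1"); //Already network byte order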
Firing Up Winsock
OK, now that we've finally covered the basics, hopefully you are starting to see light at the end of the tunnel and we can move on. Don't worry if you don't understand every aspect of the procedure, for many supplementary facts will be brought to light as we progress.
The first step to programming with windows sockets (A.K.A "Winsock") is starting up the Winsock API. There are two versions of Winsock; version one is the older, limited version; and version 2 is the latest edition and is therefore the version we prefer to specify.
#define SCK_VERSION1            0x0101
#define SCK_VERSION2            0x0202

int PASCAL WSAStartup(WORD,LPWSADATA);
int PASCAL WSACleanup(void);

//This typedef will be filled out when the function returns
//with information about the Winsock version

typedef struct WSAData
{
    WORD      wVersion;
    WORD      wHighVersion;
    char      szDescription[WSADESCRIPTION_LEN+1];
    char      szSystemStatus[WSASYS_STATUS_LEN+1];
    unsigned short      iMaxSockets;
    unsigned short      iMaxUdpDg;
    char *       lpVendorInfo;
}
WSADATA;

typedef WSADATA *LPWSADATA;
You should only need to call these functions once each, the former when you initialize Winsock, and the latter when you are finished. Don't close down Winsock until you are finished, though, as doing so would cancel any connections that your program has initiated or any ports that you are listening on.
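A minimal sketch of that start-up/shut-down pair (using MAKEWORD(2, 2), which is the same value as the 0x0202 defined above) might look like this:

#include <winsock2.h>

//A minimal sketch of starting and stopping Winsock 2.2
bool StartWinsock()
{
    WSADATA wsadata;

    //MAKEWORD(2, 2) evaluates to 0x0202, i.e. Winsock version 2.2
    if (WSAStartup(MAKEWORD(2, 2), &wsadata) != 0)
        return false; //Winsock could not be started

    if (wsadata.wVersion != MAKEWORD(2, 2))
    {
        WSACleanup(); //Wrong version - shut Winsock back down
        return false;
    }
    return true;
}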
Initializing a Socket
We understand how sockets work now, hopefully, but up until now we had no idea how to initialize them. The correct parameters must be filled out and passed to a handy API call that begins the socket (hopefully). In this case, we are returned the handle to the socket that we have created. This handle is very "handy" and we must keep it on hand to manipulate the socket's activity.
When you are all finished doing your dirty work, it is considered proper programming practice to shut down any sockets that you have opened before your program exits. Of course, when it does, all the ties and connections it has will be forcibly shut down, including any sockets, but it's better to shut them down the graceful way with closesocket(). You will need to pass the socket's handle to this API when you call it.
//There are many more options than the ones defined here, to see them
//browse the winsock2.h header file

#define SOCK_STREAM      1
#define SOCK_DGRAM      2
#define SOCK_RAW      3

#define AF_INET      2

#define IPPROTO_TCP      6

SOCKET PASCAL socket(int,int,int);
int PASCAL closesocket(SOCKET);
When creating a socket, you will need to pass the "address family", socket "type", and the "protocol type". Unless you're doing some special (or odd) work, which is beyond the scope of this report, you should typically just pass AF_INET as the default address family. This parameter specifies how the computer addresses will be interpreted.
There is more than just one type of socket; actually, there are many more. Three of the most common ones include: Raw Sockets, Stream Sockets, and Datagram Sockets. Stream sockets, however, are what we are using in this tutorial, since we are dealing with TCP protocols, so we will specify SOCK_STREAM as the second parameter to socket().
We're close, so close! We've got the "nitty gritty" stuff done and over with, so let's move on to the more exciting parts of Winsock programming.
Connecting to a Remote Host (Acting as the Client)
Let's try out what we've gone over with a simple program that can connect to a remote computer. Doing this will help you to understand much better how everything works, and helps to prevent information overload!
You'll need to fill out information about the remote host that you are connecting to, and then pass a pointer to this structure to the magic function, connect(). This structure and the API are listed below. Note that the sin_zero parameter is unneeded and is thus left blank.
struct sockaddr_in
{
      short      sin_family;
      u_short      sin_port;
      struct      in_addr sin_addr;
      char      sin_zero[8];
};

int PASCAL connect(SOCKET,const struct sockaddr*,int);
I highly recommend that you type in all of the examples in this report by hand, instead of copying and pasting it into your compiler. While I know that doing so will slow you up, I am confident and know from experience that you will learn the process much better that way than if you copy and paste the code.
//CONNECT TO REMOTE HOST (CLIENT APPLICATION)
//Include the needed header files.
//Don't forget to link libws2_32.a to your program as well
#include <winsock2.h>

SOCKET s; //Socket handle

//CONNECTTOHOST  Connects to a remote host
bool ConnectToHost(int PortNo, char* IPAddress)
{
    //Start up Winsock
    WSADATA wsadata;

    int error = WSAStartup(0x0202, &wsadata);

    //Did something happen?
    if (error)
        return false;

    //Did we get the right Winsock version?
    if (wsadata.wVersion != 0x0202)
    {
        WSACleanup(); //Clean up Winsock
        return false;
    }

    //Fill out the information needed to initialize a socket
    SOCKADDR_IN target; //Socket address information

    target.sin_family = AF_INET; // address family Internet
    target.sin_port = htons (PortNo); //Port to connect on
    target.sin_addr.s_addr = inet_addr (IPAddress); //Target IP

    s = socket (AF_INET, SOCK_STREAM, IPPROTO_TCP); //Create socket
    if (s == INVALID_SOCKET)
    {
        return false; //Couldn't create the socket
    } 

    //Try connecting...

    if (connect(s, (SOCKADDR *)&target, sizeof(target)) == SOCKET_ERROR)
    {
        return false; //Couldn't connect
    }
    else
        return true; //Success
}

//CLOSECONNECTION  shuts down the socket and closes any connection on it
void CloseConnection ()
{
    //Close the socket if it exists
    if (s)
        closesocket(s);

    WSACleanup(); //Clean up Winsock
}
Before you move on, type this code up and give it a try.
Receiving Connections - Acting as a Server
Now that you've had a feel for what it's like to connect to a remote computer, it's time to play the "server" role; so remote computers can connect to you. To do this, we can "listen" on any port and await an incoming connection. As always, we use a few handy API calls:
int PASCAL bind(SOCKET,const struct sockaddr*,int); //bind to a socket
int PASCAL listen(SOCKET,int); //Listen for an incoming connection

//Accept a connection request
SOCKET PASCAL accept(SOCKET,struct sockaddr*,int*);
When you act as the server, you can receive requests for a connection on the port you are listening on. Say, for example, a remote computer wants to chat with your computer: it will first ask your server whether or not it wants to establish a connection. In order for a connection to be made, your server must accept() the connection request. Note that the "server" decides whether or not to establish the connection. Finally, both computers are connected and can exchange data.
Although the listen() function is the easiest way to listen on a port and act as the server, it is not the most desirable. You will quickly find out when you attempt it that your program will freeze until an incoming connection is made, because listen() is a "blocking" function: it can only perform one task at a time, and will not return until a connection is pending.
This is definitely a problem, but there are a few solutions for it. First, if you are familiar with multi-threaded applications (note that we are not talking about TCP threads here), you can place the server code on a separate thread that, when started, will not freeze the entire program, so the efficiency of the parent program will not be impeded. This is really more of a pain than it needs to be, though, as you could just replace the blocking listen() approach with "asynchronous" sockets. If I've caught your attention with that important-sounding name, you can skip ahead to the next section if you like, but I recommend that you stick with me here and learn the fundamentals. We'll spiff up our code later; but for now, let's focus on the bare essentials.
Before you can even think about listening on a port, you must:
Initialize Winsock (we discussed this before, remember)
Start up a socket and make sure socket() does not return INVALID_SOCKET; the value returned on success is the handle to the socket
Fill out the SOCKADDR_IN structure with the necessary data, including the address family, port, and IP address.
Use bind() to bind the socket to a specific IP address (if you specified inet_addr("0.0.0.0") or htonl(INADDR_ANY) as the sin_addr section of SOCKADDR_IN, you can bind to any IP address)
At this point, if all has gone according to plan, you're all set to call listen() and spy to your heart's content.
The first parameter of listen() must be the handle to a socket that you have previously initialized. Of course, whatever port this socket is bound to is the port that you will be listening on. You can then specify, with the next and final parameter, how many pending connection requests can be queued at once. Generally, unless you want to limit things to one or a few connections, we just pass SOMAXCONN (SOcket MAX CONNection) as the final parameter to listen(). If the socket is up and working fine, all should go well, and when a connection request is received, listen() will return. This is your clue to call accept(), if you wish to establish a connection.
#include <winsock2.h>
#include <windows.h>

SOCKET s;
WSADATA w;

//LISTENONPORT  Listens on a specified port for incoming connections
//or data
int ListenOnPort(int portno)
{
    int error = WSAStartup (0x0202, &w);   // Fill in WSA info

    if (error)
    {
        return false; //For some reason we couldn't start Winsock
    }

    if (w.wVersion != 0x0202) //Wrong Winsock version?
    {
        WSACleanup ();
        return false;
    }

    SOCKADDR_IN addr; // The address structure for a TCP socket

    addr.sin_family = AF_INET;      // Address family
    addr.sin_port = htons (portno);   // Assign port to this socket

    //Accept a connection from any IP using INADDR_ANY
    //You could pass inet_addr("0.0.0.0") instead to accomplish the
    //same thing. If you want only to watch for a connection from a
    //specific IP, specify that instead.
    addr.sin_addr.s_addr = htonl (INADDR_ANY); 

    s = socket (AF_INET, SOCK_STREAM, IPPROTO_TCP); // Create socket

    if (s == INVALID_SOCKET)
    {
        return false; //Don't continue if we couldn't create a socket!
    }

    if (bind(s, (LPSOCKADDR)&addr, sizeof(addr)) == SOCKET_ERROR)
    {
       //We couldn't bind (this will happen if you try to bind to the same 
       //socket more than once)
        return false;
    }

    //Now we can start listening (allowing as many connections as possible to
    //be made at the same time using SOMAXCONN). You could specify any
    //integer value equal to or lesser than SOMAXCONN instead for custom
    //purposes. The function will not return until a connection request is
    //made
    listen(s, SOMAXCONN);

    //Don't forget to clean up with CloseConnection()!
    return true;
}
If you compile and run this code, as mentioned before, your program will freeze until a connection request is made. You could cause this connection request by, for example, trying a "telnet" connection. The connection will inevitably fail, of course, because the connection will not be accepted, but you will cause listen() to return, and your program will resurrect from the land of the dead. You can try this by typing telnet 127.0.0.1 "port_number" at the MSDOS command prompt (replace "port_number" with the port that your server is listening on).
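For completeness, here is a rough sketch of what the accept() step could look like once listen() returns (or, later, inside an FD_ACCEPT handler); it assumes s is the listening socket created in ListenOnPort():

//Accept the pending connection; 's' is the listening socket
SOCKADDR_IN clientInfo;
int nSize = sizeof(clientInfo);

SOCKET sClient = accept(s, (SOCKADDR*)&clientInfo, &nSize);
if (sClient == INVALID_SOCKET)
{
    //The request was withdrawn or something else went wrong
}
else
{
    //sClient is now a connected socket you can recv() on and send() to;
    //clientInfo.sin_addr holds the remote computer's IP address
}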
Asynchronous Sockets
Because using blocking functions such as listen() is so impractical and such a pain, let's go ahead, before we move on, and discuss "asynchronous sockets". I mentioned these earlier on, and promised I'd show you how they work.
C++ gives us an advantage here that most high-level programming languages do not: we don't have to go to the extra length of "sub-classing" the parent window before we can use asynchronous sockets. It's already done for us, so all we really have to do is add the handling code into the message handler. This is because asynchronous sockets, as you will see, depend on being able to send your program messages when a connection request is made, data is being received, etc. This enables them to wait silently in the background without disturbing your parent program or impeding productivity, as they only communicate when necessary. There is a relatively small price to pay, too, for it really doesn't take much additional coding. Understanding how it all works might take a little while, but you'll definitely be pleased that you took the time to understand asynchronous sockets. It'll save you a lot of trouble in the long run.
Instead of having to rework and modify all the code that we have written up to this point, making a socket asynchronous simply requires an additional line of code after the listen() function. Of course, your message handler needs to be ready to accept the following messages:
FD_ACCEPT: Your socket is listening (acting as the server) and a remote computer has requested a connection; call accept() if you wish to establish it.
FD_CONNECT: You are acting as the client (you called connect()) and the connection attempt has completed successfully.
FD_READ: We've got incoming data from the remote computer. We'll learn how to deal with this later on.
FD_CLOSE: The remote host disconnected, so we lost the connection.
These values will be sent in the lParam parameter of your message handler. I'll show you exactly where to put them in a minute; but first, we need to understand the parameters of the API call we'll be using to set our socket to asynchronous mode:
//Switch the socket to a non-blocking asynchronous one
int PASCAL WSAAsyncSelect(SOCKET,HWND,u_int,long);
The first parameter, obviously, asks for a handle to our socket, and the second requires the handle to our parent window. This is necessary so that Winsock sends the messages to the correct window! The third parameter, as you can see, accepts an integer value, for which you will specify a unique notification message number. Whenever a notification is sent to your program's message handler, whatever number you specify here will arrive as the message. Thus, you code your message handler to watch for that identification number, and then determine which type of notification has been sent. I know this is confusing, so hopefully a glance at the following source code will shed a little light on the subject:
#define MY_MESSAGE_NOTIFICATION      1048 //Custom notification message

//This is our message handler/window procedure
LRESULT CALLBACK WndProc(HWND hwnd, UINT message, WPARAM wParam, LPARAM lParam)
{
    switch (message) //handle the messages
    {
    case MY_MESSAGE_NOTIFICATION: //Is a message being sent?
        {
            switch (lParam) //If so, which one is it?
            {
            case FD_ACCEPT:
                //Connection request was made
                break;

            case FD_CONNECT:
                //Connection was made successfully
                break;

            case FD_READ:
                //Incoming data; get ready to receive
                break;

            case FD_CLOSE:
                //Lost the connection
                break;
            }
        }
        break;

        //Other normal window messages here

    default: //The message doesn't concern us
        return DefWindowProc(hwnd, message, wParam, lParam);
    }
    return 0;
}
That's not too bad, is it? Now that our handler is all set, we should append the following line of code to function ListenOnPort(), after listen():
//The socket has been created

//IP address has been bound

//Function listen() has just been called

//Set the socket to non-blocking asynchronous mode
//hwnd is a valid handle to the program's parent window
//Make sure you OR together all the needed event flags
WSAAsyncSelect (s, hwnd, MY_MESSAGE_NOTIFICATION, FD_ACCEPT | FD_CONNECT |
     FD_READ | FD_CLOSE);

//And so forth
To check that your new server really is listening, open a command prompt and run netstat -an; the output will look something like this:
C:\Documents and Settings\Cam>netstat -an
Active Connections

Proto    Local Address    Foreign Address    State      
TCP     0.0.0.0:135     0.0.0.0:0     LISTENING      
TCP     0.0.0.0:445     0.0.0.0:0     LISTENING      
TCP     0.0.0.0:5225     0.0.0.0:0     LISTENING      
TCP     0.0.0.0:5226     0.0.0.0:0     LISTENING      
TCP     0.0.0.0:8008     0.0.0.0:0     LISTENING      
TCP     127.0.0.1:1025     0.0.0.0:0     LISTENING      
TCP     127.0.0.1:1035     127.0.0.1:5226     ESTABLISHED      
TCP     127.0.0.1:5226     127.0.0.1:1035     ESTABLISHED      
TCP     127.0.0.1:8005     0.0.0.0:0     LISTENING      
UDP     0.0.0.0:445     *:*          
UDP     0.0.0.0:500     *:*          
UDP     0.0.0.0:4500     *:*          
UDP     127.0.0.1:123     *:*          
UDP     127.0.0.1:1031     *:*          
UDP     127.0.0.1:1032     *:*          
UDP     127.0.0.1:1900     *:*       
C:\Documents and Settings\Cam>
If your server is working correctly, you should see under "Local Address" something like, "0.0.0.0:Port#," where Port# is the port that you are listening on, in a LISTENING state. Incidentally, if you forget to use htons() to convert the port number, you might find a new port has been opened, but it will be on a completely different port than what you expected.
Don't worry if it takes you a couple of tries to get everything working right; it happens to all of us. You'll get it with a couple of tries. (Of course, if you try without avail for a couple weeks, burn this report and forget who wrote it!)
Sending and Receiving Data
Up to this section, all you've got for a server is a deaf mute, which, not surprisingly, does not do you a lot of good in the real world. So, let's take a look at how we can communicate properly and effectively with any computer that decides to chat with us. As always, a few API calls come to the rescue when we're stumped:
//Send text data to a remote computer
int PASCAL send(SOCKET,const char*,int,int);

//Receive incoming text from a remote computer
int PASCAL recv(SOCKET,char*,int,int);

//Advanced functions that allow you to communicate exclusively with a
//certain computer when multiple computers are connected to the same server
int PASCAL sendto(SOCKET,const char*,int,int,const struct   sockaddr*,int);
int PASCAL recvfrom(SOCKET,char*,int,int,struct sockaddr*,int*);
If you're not using an asynchronous server, then you'll have to put the recv() function in a timer routine that constantly checks for incoming data - not so elegant a solution, to say the least. If, on the other hand, you've done the smart thing and set up an asynchronous server, then all you have to do is put your recv() code inside FD_READ in your message handler. When there's incoming data, you'll be notified. Can't get any easier than that!
When we do detect activity, a buffer must be created to hold it, and then a pointer to the buffer passed to recv(). After the function returns, the text should have been dutifully placed in our buffer just itching to be displayed. Check out the source code:
case FD_READ:
    {
        char buffer[80];
        memset(buffer, 0, sizeof(buffer)); //Clear the buffer

        //Put the incoming text into our buffer
        recv (s, buffer, sizeof(buffer)-1, 0);

        //Do something smart with the text in buffer
        //You could display it in a textbox, or use:

        //MessageBox(hwnd, buffer, "Captured Text", MB_OK);
    }
    break;
Now that you can receive incoming text from the remote computer or server, all that our server lacks is the ability to reply, or "send" data to the remote computer. This is probably the simplest and most self-evident process in Winsock programming, but if you're like me and like to have every step spelled out for you, here's how to use send() correctly:
char *szpText;

//Allocate memory for the text in your Text Edit, retrieve the text,
//(see the source code for this) and then pass a pointer to it

send(s, szpText, len_of_text, 0);
For brevity's sake, the above snippet of code is just a skeleton to give you a general idea of how send() is used. To see the entire code, please download the example source code that comes along with this tutorial.
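As a rough idea of the shape that code usually takes (the text here is only a stand-in for whatever you pull out of your edit control, and strlen() needs string.h), a send() call ends up looking something like this:

//Send a plain text string over the connected socket 's'
const char *szpText = "Hello from WinsockServer!"; //Stand-in text
int len_of_text = (int)strlen(szpText);

int nSent = send(s, szpText, len_of_text, 0);
if (nSent == SOCKET_ERROR)
{
    //The connection was probably lost
}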
On a more advanced note, sometimes the simple send() and recv() functions just aren't enough to do what you want. This happens when you have multiple connections at the same time from different sources (remember, when we called listen(), we passed SOMAXCONN to allow the maximum number of connections possible), and you need to send data to one particular computer, and not all of them. If you're uncommonly sharp, you may have noticed two extra API calls below send() and recv() (extra credit if you did!): sendto() and recvfrom().
These two API calls allow you to communicate with any one remote computer without tipping your hand to everyone else that is connected. There is an extra parameter in these advanced functions that accepts a pointer to a sockaddr_in structure, which you can use to specify the IP address of any remote computer that you want to communicate with exclusively. This is an important skill to know if you are building a full-fledged chat program, or something similar, but beyond giving you the basic idea of how these functions work, I'll let you figure them out on your own. (Don't you hate it when authors say that? Usually it's because we don't have the slightest clue ourselves, but really, it shouldn't take much to implement them if you decide that you need to.)
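For what it's worth, sendto() and recvfrom() are most at home on datagram (UDP) sockets, where every packet carries the peer's address. The sketch below assumes a SOCK_DGRAM socket, and the IP address and port in it are made-up values:

//Create a datagram (UDP) socket
SOCKET u = socket(AF_INET, SOCK_DGRAM, IPPROTO_UDP);

//Fill out the address of the one computer we want to talk to
SOCKADDR_IN target;
target.sin_family = AF_INET;
target.sin_port = htons(4000);                      //Made-up port
target.sin_addr.s_addr = inet_addr("192.168.0.5");  //Made-up IP

const char *msg = "Hello over UDP";
sendto(u, msg, (int)strlen(msg), 0, (SOCKADDR*)&target, sizeof(target));

//Receiving: recvfrom() also tells us who the sender was
char buf[80] = {0};
SOCKADDR_IN from;
int fromlen = sizeof(from);
recvfrom(u, buf, sizeof(buf) - 1, 0, (SOCKADDR*)&from, &fromlen);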
Some Final Notes
Well, by now you should have a decent understanding of Windows sockets (or a profound hatred of them), but at any rate, if you're looking for a much better explanation than I can give you here, please take a look at the example source code provided with this article. Practice will do much more for you than reading any article.
Additionally, I have found that if you just copy and paste code, or merely compile someone else's code you found on the Internet, you won't come close to the level of understanding you will gain if you type in all the examples by hand yourself. A big pain, I know! But if you take the time to do it, you'll save yourself a lot of trouble in the long run.
Have fun, and let me know what you think of this article by posting feedback.
This article (not including the accompanying source code) is copyrighted © 2006 by the author, and cannot be modified, sold, or redistributed for personal gain without prior explicit permission from him. It is provided free of charge for the benefit of the public. You are allowed, however, to make and distribute as many copies of it as you like, provided that you do not modify the original content in any way. Thanks!
License
===============================================================================================

Inter-Process Communication (IPC) Introduction and Sample Code
By All-In-One Code Framework | 19 Dec 2009
This article will cover general IPC technologies in All-In-One Code Framework. The IPC technologies include Named Pipes, File Mapping, MailSlot, etc.
Download IPC source code - 229.21 KB
Introduction
Inter-Process Communication (IPC) is a set of techniques for the exchange of data among multiple threads in one or more processes. Processes may be running on one or more computers connected by a network. IPC techniques include Named Pipes, File Mapping, Mailslot, Remote Procedure Calls (RPC), etc.
In All-In-One Code Framework, we have already implemented samples (C++ and C#) for Named Pipes, File Mapping, Mail Slot, and Remoting. We are going to add more techniques such as the Clipboard, Winsock, etc. You can download the latest code from http://cfx.codeplex.com/.
Background
All-In-One Code Framework (AIO for short) delineates the framework and skeleton of most Microsoft development techniques (e.g., COM, Data Access, IPC) using typical sample code in different programming languages (e.g., Visual C#, VB.NET, Visual C++).
Using the Code
Find samples by following the steps below:
Download the zip file and unzip it.
Open the folder [Visual Studio 2008].
Open the solution file IPC.sln. You must pre-install Visual Studio 2008 on the machine.
In the Solution Explorer, open the [Process] \ [IPC and RPC] folder.
Samples Structure and Relationship

Named Pipe
A named pipe is a mechanism for one-way or bi-directional inter-process communication between a pipe server and one or more pipe clients, on the local machine or across computers in an intranet:
PIPE_ACCESS_INBOUND:

Client (GENERIC_WRITE) ---> Server (GENERIC_READ)


PIPE_ACCESS_OUTBOUND:

Client (GENERIC_READ) <--- Server (GENERIC_WRITE)


PIPE_ACCESS_DUPLEX:

Client (GENERIC_READ or GENERIC_WRITE, or both)
                <--> Server (GENERIC_READ and GENERIC_WRITE)
This sample demonstrates a named pipe server, \\.\pipe\HelloWorld, that supports PIPE_ACCESS_DUPLEX. It first creates such a named pipe, then it listens to the client's connection. When a client is connected, the server attempts to read the client's requests from the pipe and writes a response.
A named pipe client attempts to connect to the pipe server, \\.\pipe\HelloWorld, with the GENERIC_READ and GENERIC_WRITE permissions. The client writes a message to the pipe server and receives its response.
Code Logic
Server-side logic:
Create a named pipe. (CreateNamedPipe)
Wait for the client to connect. (ConnectNamedPipe)
Read client requests from the pipe and write the response. (ReadFile, WriteFile)
Disconnect the pipe, and close the handle. (DisconnectNamedPipe, CloseHandle)
Client-side logic:
Try to open a named pipe. (CreateFile)
Set the read mode and the blocking mode of the specified named pipe. (SetNamedPipeHandleState)
Send a message to the pipe server and receive its response. (WriteFile, ReadFile)
Close the pipe. (CloseHandle)
Code - CreateNamedPipe (C++)
// Create the named pipe.
HANDLE hPipe = CreateNamedPipe(

strPipeName,                      // The unique pipe name. This string must
                                  // have the form of \\.\pipe\pipename
PIPE_ACCESS_DUPLEX,               // The pipe is bi-directional; both
                                  // server and client processes can read
                                  // from and write to the pipe
PIPE_TYPE_MESSAGE |               // Message type pipe
PIPE_READMODE_MESSAGE |           // Message-read mode
PIPE_WAIT,                        // Blocking mode is enabled
PIPE_UNLIMITED_INSTANCES,         // Max. instances

// These two buffer sizes have nothing to do with the buffers that
// are used to read from or write to the messages. The input and
// output buffer sizes are advisory. The actual buffer size reserved
// for each end of the named pipe is either the system default, the
// system minimum or maximum, or the specified size rounded up to the
// next allocation boundary. The buffer size specified should be
// small enough that your process will not run out of nonpaged pool,
// but large enough to accommodate typical requests.

BUFFER_SIZE,                      // Output buffer size in bytes
BUFFER_SIZE,                      // Input buffer size in bytes
NMPWAIT_USE_DEFAULT_WAIT,         // Time-out interval
&sa                               // Security attributes
);
For more code samples, please download AIO source code.
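To give a rough idea of the client side in C++ (a trimmed sketch that mirrors the client-side logic listed above; the pipe name matches the sample's \\.\pipe\HelloWorld, and error handling is kept to a minimum):

#include <windows.h>
#include <tchar.h>

void HelloWorldPipeClient()
{
    // 1. Try to open the named pipe. (CreateFile)
    HANDLE hPipe = CreateFile(
        _T("\\\\.\\pipe\\HelloWorld"),   // Pipe name
        GENERIC_READ | GENERIC_WRITE,    // Read and write access
        0,                               // No sharing
        NULL,                            // Default security attributes
        OPEN_EXISTING,                   // Opens the existing pipe
        0,                               // Default attributes
        NULL);                           // No template file
    if (hPipe == INVALID_HANDLE_VALUE)
        return;

    // 2. Set the read mode of the pipe. (SetNamedPipeHandleState)
    DWORD dwMode = PIPE_READMODE_MESSAGE;
    SetNamedPipeHandleState(hPipe, &dwMode, NULL, NULL);

    // 3. Send a request and receive the server's response. (WriteFile, ReadFile)
    char szRequest[] = "Hello from the client";
    char szReply[256] = {0};
    DWORD cbWritten = 0, cbRead = 0;
    WriteFile(hPipe, szRequest, sizeof(szRequest), &cbWritten, NULL);
    ReadFile(hPipe, szReply, sizeof(szReply) - 1, &cbRead, NULL);

    // 4. Close the pipe. (CloseHandle)
    CloseHandle(hPipe);
}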
Security Attribute for Named Pipes
If lpSecurityAttributes of CreateNamedPipe is NULL, the named pipe gets a default security descriptor and the handle cannot be inherited. The ACLs in the default security descriptor for a named pipe grant full control to the LocalSystem account, administrators, and the creator owner. They also grant read access to members of the Everyone group and the anonymous account. In other words, with NULL as the security attribute, the named pipe cannot be connected to with WRITE permission across the network, or from a local client running at a lower integrity level. Here, we fill in the security attributes to grant EVERYONE all access (not just connect access) to the server. This solves the cross-network and cross-IL issues, but it also creates a security hole: the clients get WRITE_OWNER access, and the server then loses control of the pipe object.
Code - Security Attributes (C++)
SECURITY_ATTRIBUTES sa;
sa.lpSecurityDescriptor = (PSECURITY_DESCRIPTOR)malloc(SECURITY_DESCRIPTOR_MIN_LENGTH);
InitializeSecurityDescriptor(sa.lpSecurityDescriptor, SECURITY_DESCRIPTOR_REVISION);
// ACL is set as NULL in order to allow all access to the object.
SetSecurityDescriptorDacl(sa.lpSecurityDescriptor, TRUE, NULL, FALSE);
sa.nLength = sizeof(sa);
sa.bInheritHandle = TRUE;
.NET Named Pipe
.NET supports named pipes in two ways:
P/Invoke the native APIs.
By P/Invoke-ing the native APIs from .NET, we can mimic the code logic in CppNamedPipeServer to create the named pipe server, \\.\pipe\HelloWorld, that supports PIPE_ACCESS_DUPLEX.
PInvokeNativePipeServer first creates such a named pipe, then it listens to the client's connection. When a client is connected, the server attempts to read the client's requests from the pipe and write a response.
System.IO.Pipes namespace
In .NET Framework 3.5, the namespace System.IO.Pipes and a set of classes (e.g., PipeStream, NamedPipeServerStream) are added to the .NET BCL. These classes make the programming of named pipes in .NET much easier and safer than P/Invoke-ing the native APIs directly.
BCLSystemIOPipeServer first creates such a named pipe, then it listens to the client's connection. When a client is connected, the server attempts to read the client's requests from the pipe and write a response.
Code - Create Named Pipe (C#)
// Prepare the security attributes
// Granting everyone the full control of the pipe is just for
// demo purpose, though it creates a security hole.
PipeSecurity pipeSa = new PipeSecurity();
pipeSa.SetAccessRule(new PipeAccessRule("Everyone",
       PipeAccessRights.ReadWrite, AccessControlType.Allow));

// Create the named pipe
pipeServer = new NamedPipeServerStream(
    strPipeName,                    // The unique pipe name.
    PipeDirection.InOut,            // The pipe is bi-directional
    NamedPipeServerStream.MaxAllowedServerInstances,
    PipeTransmissionMode.Message,   // Message type pipe
    PipeOptions.None,               // No additional parameters
    BUFFER_SIZE,                    // Input buffer size
    BUFFER_SIZE,                    // Output buffer size
    pipeSa,                         // Pipe security attributes
    HandleInheritability.None       // Not inheritable
);
File Mapping
File mapping is a mechanism for one-way or bi-directional inter-process communication among two or more processes in the local machine. To share a file or memory, all of the processes must use the name or the handle of the same file mapping object.
To share a file, the first process creates or opens a file by using the CreateFile function. Next, it creates a file mapping object by using the CreateFileMapping function, specifying the file handle and a name for the file mapping object. The names of events, semaphores, mutexes, waitable timers, jobs, and file mapping objects share the same namespace. Therefore, the CreateFileMapping and OpenFileMapping functions fail if they specify a name that is in use by an object of another type.
To share memory that is not associated with a file, a process must use the CreateFileMapping function and specify INVALID_HANDLE_VALUE as the hFile parameter instead of an existing file handle. The corresponding file mapping object accesses memory backed by the system paging file. You must specify a size greater than zero when you specify an hFile of INVALID_HANDLE_VALUE in a call to CreateFileMapping.
Processes that share files or memory must create file views by using the MapViewOfFile or MapViewOfFileEx functions. They must coordinate their access using semaphores, mutexes, events, or some other mutual exclusion techniques.
This example demonstrates a named shared memory server, Local\HelloWorld, that creates the file mapping object with INVALID_HANDLE_VALUE. By using the PAGE_READWRITE flag, the process has read/write permission to the memory through any file view that is created.
The named shared memory client, Local\HelloWorld, can access the string written to the shared memory by the first process. The console displays the message "Message from the first process" that is read from the file mapping created by the first process.
Code Logic
Server-side logic:
Create a file mapping. (CreateFileMapping)
Map the view of the file mapping into the address space of the current process. (MapViewOfFile)
Write message to the file view. (CopyMemory)
Unmap the file view and close the file mapping objects. (UnmapViewOfFile, CloseHandle)
Client-side logic:
Try to open a named file mapping. (OpenFileMapping)
Map the view of the file mapping into the address space of the current process. (MapViewOfFile)
Read message from the view of the shared memory.
Unmap the file view and close the file mapping objects. (UnmapViewOfFile, CloseHandle)
Code - CreateFileMapping (C++)
// In terminal services: The name can have a "Global\" or "Local\" prefix
// to explicitly create the object in the global or session namespace.
// The remainder of the name can contain any character except the 
// backslash character (\). For details, please refer to:
// http://msdn.microsoft.com/en-us/library/aa366537.aspx
TCHAR szMapFileName[] = _T("Local\\HelloWorld");

// Create the file mapping object
HANDLE hMapFile = CreateFileMapping(
       INVALID_HANDLE_VALUE,      // Use paging file instead of existing file.
                                  // Pass file handle to share in a file.

       NULL,                      // Default security
       PAGE_READWRITE,            // Read/write access
       0,                         // Max. object size
       BUFFER_SIZE,               // Buffer size 
       szMapFileName              // Name of mapping object
);
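Continuing the sketch from the snippet above (hMapFile and BUFFER_SIZE are the handle and constant used there, and the message text is just an example), the view is then mapped and written like this:

// Map a view of the file mapping into this process's address space
LPVOID pView = MapViewOfFile(
    hMapFile,               // Handle of the file mapping object
    FILE_MAP_ALL_ACCESS,    // Read/write access to the view
    0, 0,                   // Offset into the mapping (high, low)
    BUFFER_SIZE);           // Number of bytes to map

if (pView != NULL)
{
    // Write the message into the shared memory (must fit in BUFFER_SIZE)
    TCHAR szMessage[] = _T("Message from the first process");
    CopyMemory(pView, szMessage, sizeof(szMessage));

    // Unmap the file view when done with it
    UnmapViewOfFile(pView);
}

// Close the file mapping object when the sharing is finished
CloseHandle(hMapFile);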
For file mapping, .NET currently supports only P/Invoking the native APIs. By P/Invoke, .NET can simulate the same behavior as native code.
Code - CreateFileMapping (C# - P/Invoke)
/// <summary>
/// Creates or opens a named or unnamed file mapping object for
/// a specified file.
/// </summary>
/// <param name="hFile">A handle to the file from which to create
/// a file mapping object.</param>
/// <param name="lpAttributes">A pointer to a SECURITY_ATTRIBUTES
/// structure that determines whether a returned handle can be
/// inherited by child processes.</param>
/// <param name="flProtect">Specifies the page protection of the
/// file mapping object. All mapped views of the object must be
/// compatible with this protection.</param>
/// <param name="dwMaximumSizeHigh">The high-order DWORD of the
/// maximum size of the file mapping object.</param>
/// <param name="dwMaximumSizeLow">The low-order DWORD of the
/// maximum size of the file mapping object.</param>
/// <param name="lpName">The name of the file mapping object.</param>
/// <returns>If the function succeeds, the return value is a
/// handle to the newly created file mapping object.</returns>
[DllImport("Kernel32.dll", SetLastError = true)]
public static extern IntPtr CreateFileMapping(
    IntPtr hFile,                   // Handle to the file
    IntPtr lpAttributes,            // Security Attributes
    FileProtection flProtect,       // File protection
    uint dwMaximumSizeHigh,         // High-order DWORD of size
    uint dwMaximumSizeLow,          // Low-order DWORD of size
    string lpName                   // File mapping object name
);
Mailslot
Mailslot is a mechanism for one-way inter-process communication in the local machine or across computers in the intranet. Any client can store messages in a mailslot. The creator of the slot, i.e., the server, retrieves the messages that are stored there:
Client (GENERIC_WRITE) ---> Server (GENERIC_READ)
This sample demonstrates a mailslot server, \\.\mailslot\HelloWorld. It first creates such a mailslot, then it reads the new messages in the slot every five seconds. Then, a mailslot client connects and writes to the mailslot \\.\mailslot\HelloWorld.
Code Logic
Server-side logic:
Create a mailslot. (CreateMailslot)
Check messages in the mailslot. (ReadMailslot)
Check for the number of messages in the mailslot. (GetMailslotInfo)
Retrieve the messages one by one from the mailslot. While reading, update the number of messages that are left in the mailslot. (ReadFile, GetMailslotInfo)
Close the handle of the mailslot instance. (CloseHandle)
Client-side logic:
Open the mailslot. (CreateFile)
Write messages to the mailslot. (WriteMailslot, WriteFile)
Close the slot. (CloseHandle)
Code - GetMailslotInfo (C++)
/////////////////////////////////////////////////////////////////////////
// Check for the number of messages in the mailslot.
// 
bResult = GetMailslotInfo(
        hMailslot,                    // Handle of the mailslot
        NULL,                         // No maximum message size
        &cbMessageBytes,              // Size of next message
        &cMessages,                   // Number of messages
        NULL);                        // No read time-out
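Continuing from that call, a rough sketch of the read loop (error handling trimmed, buffer handling simplified) could look like this:

// Retrieve the messages one by one; cMessages and cbMessageBytes were
// filled in by the GetMailslotInfo call above.
while (cMessages != 0)
{
    // Read one message; cbMessageBytes holds its exact size
    char* pBuffer = (char*)malloc(cbMessageBytes);
    DWORD cbRead = 0;
    ReadFile(hMailslot, pBuffer, cbMessageBytes, &cbRead, NULL);

    // ...do something useful with the message in pBuffer...

    free(pBuffer);

    // Update the number of messages left in the mailslot
    GetMailslotInfo(hMailslot, NULL, &cbMessageBytes, &cMessages, NULL);
}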
Code - CreateMailslot (C# - P/Invoke)
/// <summary>
/// Creates an instance of a mailslot and returns a handle for subsequent
/// operations.
/// </summary>
/// <param name="lpName">Mailslot name.</param>
/// <param name="nMaxMessageSize">The maximum size of a single message.</param>
/// <param name="lReadTimeout">The time a read operation can wait for a
/// message.</param>
/// <param name="lpSecurityAttributes">Security attributes.</param>
/// <returns>If the function succeeds, the return value is a handle to
/// the server end of a mailslot instance.</returns>
[DllImport("kernel32.dll", SetLastError = true)]
public static extern IntPtr CreateMailslot(
    string lpName,              // Mailslot name
    uint nMaxMessageSize,       // Max size of a single message in bytes
    int lReadTimeout,           // Timeout of a read operation
    IntPtr lpSecurityAttributes // Security attributes
);
Remoting
.NET Remoting is a mechanism for one-way inter-process communication and RPC between .NET applications in the local machine or across computers in the intranet and internet.
.NET Remoting allows an application to make a remotable object available across remoting boundaries, which includes different appdomains, processes, or even different computers connected by a network. .NET Remoting makes a reference of a remotable object available to a client application, which then instantiates and uses a remotable object as if it were a local object. However, the actual code execution happens at the server-side. All requests to the remotable object are proxied by the .NET Remoting runtime over Channel objects that encapsulate the actual transport mode, including TCP streams, HTTP streams, and named pipes. As a result, by instantiating proper Channel objects, a .NET Remoting application can be made to support different communication protocols without recompiling the application. The runtime itself manages the act of serialization and marshalling of objects across the client and server appdomains.
Code - Create and Register a Channel (C#)
/////////////////////////////////////////////////////////////////////
// Create and register a channel (TCP channel in this example) that
// is used to transport messages across the remoting boundary.
//
// Properties of the channel
IDictionary props = new Hashtable();
props["port"] = 6100;   // Port of the TCP channel
props["typeFilterLevel"] = TypeFilterLevel.Full;
// Formatters of the messages for delivery
BinaryClientFormatterSinkProvider clientProvider = null;
BinaryServerFormatterSinkProvider serverProvider =
              new BinaryServerFormatterSinkProvider();
serverProvider.TypeFilterLevel = TypeFilterLevel.Full;

// Create a TCP channel
TcpChannel tcpChannel = new TcpChannel(props, clientProvider, serverProvider);

// Register the TCP channel
ChannelServices.RegisterChannel(tcpChannel, true);
Code - Register Remotable Types (VB.NET)
'''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''
' Register the remotable types on the service end as
' server-activated types (aka well-known types) or client-activated
' types.
' Register RemotingShared.SingleCallObject as a SingleCall server-
' activated type.
RemotingConfiguration.RegisterWellKnownServiceType(GetType(RemotingShared.SingleCallObject), _
                      "SingleCallService", WellKnownObjectMode.SingleCall)
' Register RemotingShared.SingletonObject as a Singleton server-
' activated type.
RemotingConfiguration.RegisterWellKnownServiceType(GetType(RemotingShared.SingletonObject), _
                      "SingletonService", WellKnownObjectMode.Singleton)
' Register RemotingShared.ClientActivatedObject as a client-
' activated type.
RemotingConfiguration.ApplicationName = "RemotingService"
RemotingConfiguration.RegisterActivatedServiceType( _
        GetType(Global.RemotingShared.ClientActivatedObject))
Points of Interest
In the pilot phase of the AIO project, we focus on five techniques: COM, Library, IPC, Office, and Data Access. There are currently 42 code examples in the project, and the collection grows at a rate of about seven examples per week.

=========================================================================================================

A critical section object provides synchronization similar to that provided by a mutex object, except that a critical section can be used only by the threads of a single process. Event, mutex, and semaphore objects can also be used in a single-process application, but critical section objects provide a slightly faster, more efficient mechanism for mutual-exclusion synchronization (a processor-specific test and set instruction). Like a mutex object, a critical section object can be owned by only one thread at a time, which makes it useful for protecting a shared resource from simultaneous access. Unlike a mutex object, there is no way to tell whether a critical section has been abandoned.
Starting with Windows Server 2003 with Service Pack 1 (SP1), threads waiting on a critical section do not acquire the critical section on a first-come, first-served basis. This change increases performance significantly for most code. However, some applications depend on first-in, first-out (FIFO) ordering and may perform poorly or not at all on current versions of Windows (for example, applications that have been using critical sections as a rate-limiter). To ensure that your code continues to work correctly, you may need to add an additional level of synchronization. For example, suppose you have a producer thread and a consumer thread that are using a critical section object to synchronize their work. Create two event objects, one for each thread to use to signal that it is ready for the other thread to proceed. The consumer thread will wait for the producer to signal its event before entering the critical section, and the producer thread will wait for the consumer thread to signal its event before entering the critical section. After each thread leaves the critical section, it signals its event to release the other thread.
Windows Server 2003 and Windows XP/2000: Threads that are waiting on a critical section are added to a wait queue; they are woken and generally acquire the critical section in the order in which they were added to the queue. However, if threads are added to this queue at a fast enough rate, performance can be degraded because of the time it takes to awaken each waiting thread.
The process is responsible for allocating the memory used by a critical section. Typically, this is done by simply declaring a variable of type CRITICAL_SECTION. Before the threads of the process can use it, initialize the critical section by using the InitializeCriticalSection or InitializeCriticalSectionAndSpinCount function.
A thread uses the EnterCriticalSection or TryEnterCriticalSection function to request ownership of a critical section. It uses the LeaveCriticalSection function to release ownership of a critical section. If the critical section object is currently owned by another thread, EnterCriticalSection waits indefinitely for ownership. In contrast, when a mutex object is used for mutual exclusion, the wait functions accept a specified time-out interval. The TryEnterCriticalSection function attempts to enter a critical section without blocking the calling thread.
When a thread owns a critical section, it can make additional calls to EnterCriticalSection or TryEnterCriticalSection without blocking its execution. This prevents a thread from deadlocking itself while waiting for a critical section that it already owns. To release its ownership, the thread must call LeaveCriticalSection one time for each time that it entered the critical section. There is no guarantee about the order in which waiting threads will acquire ownership of the critical section.
A thread uses the InitializeCriticalSectionAndSpinCount or SetCriticalSectionSpinCount function to specify a spin count for the critical section object. Spinning means that when a thread tries to acquire a critical section that is locked, the thread enters a loop, checks to see if the lock is released, and if the lock is not released, the thread goes to sleep. On single-processor systems, the spin count is ignored and the critical section spin count is set to 0 (zero). On multiprocessor systems, if the critical section is unavailable, the calling thread spins dwSpinCount times before performing a wait operation on a semaphore that is associated with the critical section. If the critical section becomes free during the spin operation, the calling thread avoids the wait operation.
Any thread of the process can use the DeleteCriticalSection function to release the system resources that are allocated when the critical section object is initialized. After this function is called, the critical section object cannot be used for synchronization.
When a critical section object is owned, the only other threads affected are the threads that are waiting for ownership in a call to EnterCriticalSection. Threads that are not waiting are free to continue running.
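To tie these calls together, here is a minimal sketch of the typical usage pattern (a made-up shared counter protected by a critical section):

#include <windows.h>

CRITICAL_SECTION g_cs;      // Allocated by the process, here as a global
long g_nSharedCounter = 0;  // The shared resource we want to protect

void Setup()
{
    // Alternatively: InitializeCriticalSectionAndSpinCount(&g_cs, 4000);
    InitializeCriticalSection(&g_cs);
}

void Increment()            // Called from any thread of the process
{
    EnterCriticalSection(&g_cs);   // Blocks until ownership is acquired
    ++g_nSharedCounter;            // Safe: only one thread is in here
    LeaveCriticalSection(&g_cs);   // One Leave for every Enter
}

void Teardown()
{
    DeleteCriticalSection(&g_cs);  // Release the system resources
}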
======================================================================================================
Synchronization in Multithreaded Applications with MFC
By Arman Z. Sahakyan | 19 Mar 2007
Introduces synchronization concepts and practices in multithreaded applications
Download source files - 47.7 KB

Introduction
This article discusses the basic synchronization concepts and practices that should be useful for beginners doing multithreaded programming. By beginner, I don't mean those who are new to the C++ language, but people who are somewhat new to multithreaded programming. The main concentration of this article is on synchronization techniques; thus, it reads like a tutorial on synchronization.
The General View
During their execution, threads more or less interoperate with each other. This interoperation may take various forms and be of various kinds. For example, a thread, after performing the task it is assigned to, informs another thread about it. Then the second thread, whose job is a logical continuation of the first thread's, starts operating.
All these forms of interoperation can be described by the term synchronization, which can be supported in several ways. The most commonly used means are objects whose primary aim is to support synchronization itself. The following objects are intended to support synchronization (this is not a complete list):
Semaphores
Mutexes
Critical Sections
Events
Each of these objects has a different special purpose and usage but the general purpose is to support synchronization. I will introduce them to you through this article later. There are other objects that can be used as synchronization mediums such as Process and Thread objects. Their usage enables a programmer to decide, for example, if a given process or thread has finished its execution or not.
To use the Process and Thread objects for synchronization purposes, we are supposed to use wait-functions. Before learning these functions, you should understand a key concept: any kernel object that can be used as a synchronization object is, at any moment, in one of two states, the signaled state or the nonsignaled state. (Critical sections are the exception; they are not kernel objects and do not follow this scheme.) For example, Process and Thread objects are nonsignaled while they are executing and become signaled when they finish their execution. To decide whether a given process or thread has finished, we should find out whether its representative object is in the signaled state; to do that, we turn to the wait-functions.
Wait-functions
The following function is the simplest of the wait-functions. It has the following declaration:
DWORD WaitForSingleObject
(
  HANDLE hHandle,
  DWORD dwMilliseconds
);
The parameter hHandle takes the descriptor of an object whose signaled or nonsignaled state is going to be examined. The parameter dwMilliseconds takes the time that the calling thread should wait for the examined object to enter the signaled state. As soon as the object is signaled or the given time interval expires, the function returns control to the calling thread. If dwMilliseconds is INFINITE (-1), the thread will wait until the object becomes signaled; if it never becomes signaled, the thread will wait forever.
For example, the following call checks whether a process [identified by hProcess descriptor] is in execution or not:
DWORD dw = WaitForSingleObject(hProcess, 0);
switch (dw)
{
   case WAIT_OBJECT_0:
      // the process has exited
      break;

   case WAIT_TIMEOUT:
      // the process is still executing
      break;

   case WAIT_FAILED:
      // failure
      break;
}
As you notice, we passed 0 to the function's dwMilliseconds parameter, in which case the function instantly checks the object's state [signaled or nonsignaled] and immediately returns control. If the object is signaled, the function returns WAIT_OBJECT_0. If it is nonsignaled, WAIT_TIMEOUT is returned. In case of failure, WAIT_FAILED is returned (a failure may occur when an invalid descriptor is passed to the function).
The next wait-function is similar to the previous one, except that it takes a list of descriptors and waits until either one of them or all of them become signaled:
DWORD WaitForMultipleObjects
(
  DWORD nCount,
  CONST HANDLE *lpHandles,
  BOOL fWaitAll,
  DWORD dwMilliseconds
);
The parameter nCount takes the number of descriptors to be examined. The parameter lpHandles should point to an array of descriptors. If the parameter fWaitAll is TRUE, the function will wait until all the objects become signaled. If it is FALSE, the function returns as soon as any single object becomes signaled (no matter what the others are). dwMilliseconds means the same as in the previous function.
For example, the following code decides which of the threads identified by the given HANDLEs will exit first:
HANDLE h[3];
h[0] = hThread1;
h[1] = hThread2;
h[2] = hThread3;

DWORD dw = WaitForMultipleObjects(3, h, FALSE, 5000);
switch (dw)
{
   case WAIT_FAILED:
      // failure
      break;

   case WAIT_TIMEOUT:
      // no thread exited within 5000 ms
      break;

   case WAIT_OBJECT_0 + 0:
      // the thread with the h[0] descriptor has exited
      break;

   case WAIT_OBJECT_0 + 1:
      // the thread with the h[1] descriptor has exited
      break;

   case WAIT_OBJECT_0 + 2:
      // the thread with the h[2] descriptor has exited
      break;
}
As we see, the function can return different values that show why the function returned. You already know the meaning of the first two values. The remaining values follow this logic: WAIT_OBJECT_0 + index is returned, indicating that the object at position index in the HANDLE array has become signaled. If the fWaitAll parameter is TRUE, WAIT_OBJECT_0 will be returned once all the objects become signaled.
A thread, if it calls a wait-function, enters the kernel mode from the user mode. This fact is both bad and good. It is bad because to enter the kernel mode, approximately 1000 processor cycles are required which may be too expensive in a concrete situation. The good point is that after entering the kernel mode, no processor usage is needed; the thread is asleep.
Let's turn to MFC and see what it can do for us. There are two classes that encapsulate calls to wait-functions; CSingleLock and CMultiLock. We will see their usage later in this article.

Synchronization object    Equivalent C++ class      
Events    CEvent      
Critical sections    CCriticalSection      
Mutexes    CMutex      
Semaphores    CSemaphore   
Each of these classes derives from a single class, CSyncObject, whose most used member is the overloaded HANDLE operator that returns the underlying descriptor of a given synchronization object. All these classes are declared in the <afxmt.h> include file.
Events
Generally, events are used in cases when a thread (or threads) is supposed to start doing its job after a specified action has occurred. For example, a thread might wait until the necessary data is gathered and then start saving it to the hard drive. There are two kinds of events: manual-reset and auto-reset. By using an event, we can simply notify another thread that a specified action has occurred. With the first kind of event, manual-reset, a thread can notify more than one thread about the action; with the second kind, auto-reset, only one thread can be notified. In MFC, the CEvent class encapsulates the event object (in terms of Windows, it is represented by a HANDLE value). The constructor of CEvent allows us to create both manual-reset and auto-reset events; by default, the auto-reset kind is created. To notify the waiting threads, we call the CEvent::SetEvent method, which makes the event enter the signaled state. If the event is manual-reset, it stays signaled until a corresponding CEvent::ResetEvent call makes it enter the nonsignaled state again; this is the feature that allows a thread to notify more than one thread with a single SetEvent call. If the event is auto-reset, only one of the waiting threads will receive the notification, and after it is received the event automatically returns to the nonsignaled state. The following two examples illustrate these points. The first example:
// create an auto-reset event
CEvent g_eventStart;

UINT ThreadProc1(LPVOID pParam)
{
    ::WaitForSingleObject(g_eventStart, INFINITE);

        ...

    return 0;
}

UINT ThreadProc2(LPVOID pParam)
{
    ::WaitForSingleObject(g_eventStart, INFINITE);

        ...

    return 0;
}
In this code, a global CEvent object of auto-reset type is created. In addition, there are two worker threads waiting for that event in order to start their job. As soon as a third thread calls SetEvent on the object, one and only one of these two threads (there is no way to tell in advance which one) receives the notification, after which the event returns to the nonsignaled state, so the second thread cannot catch it. The code, though not very useful, illustrates how an auto-reset event works.
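The article does not show the thread that signals the event, so here is a minimal sketch of such a controlling thread, assuming the g_eventStart object declared above (the function name and the omitted work are illustrative only):

UINT ControllerProc(LPVOID pParam)
{
    // ... prepare whatever the workers are waiting for ...

    // Exactly one of the waiting threads is released; the auto-reset event
    // then returns to the nonsignaled state on its own.
    g_eventStart.SetEvent();

    return 0;
}
Now let's look at the second example: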
// create a manual-reset event
CEvent g_eventStart(FALSE, TRUE);

UINT ThreadProc1(LPVOID pParam)
{
    ::WaitForSingleObject(g_eventStart, INFINITE);

        ...

    return 0;
}

UINT ThreadProc2(LPVOID pParam)
{
    ::WaitForSingleObject(g_eventStart, INFINITE);

        ...

    return 0;
}
This code differs from the previous one only in the CEvent constructor's parameters. But in terms of functionality, there is a fundamental difference in the way the two threads may behave. If a third thread calls the SetEvent method on this object, it can be guaranteed that the two threads will start working at (almost) the same time. This is because a manual-reset event, after entering the signaled state, stays there until a corresponding ResetEvent call is made.
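Again as a hedged sketch, not part of the original example, the controlling thread for the manual-reset case might look like this; note the explicit ResetEvent needed to re-arm the event afterwards:

UINT ControllerProc(LPVOID pParam)
{
    // ... prepare the data both workers are waiting for ...

    g_eventStart.SetEvent();    // both waiting threads are released

    // Later, once the workers have started, put the event back into the
    // nonsignaled state so that future waits block again.
    g_eventStart.ResetEvent();

    return 0;
}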
Yet another method for working with events is CEvent::PulseEvent. This method puts the event into the signaled state and then immediately back into the nonsignaled state. If the event is of manual-reset type, all the threads waiting at that moment are notified before the event returns to the nonsignaled state. If the event is of auto-reset type, only one thread gets notified even if many are waiting. If no thread is waiting, the call to PulseEvent does nothing.
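A one-line sketch of the call, using the manual-reset g_eventStart from the previous example (an illustration rather than code from the article):

// Releases every thread currently waiting on g_eventStart, then leaves the
// event nonsignaled, so a thread that starts waiting a moment later blocks.
g_eventStart.PulseEvent();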
Example - WorkerThreads
In this example I will show how to create worker threads and how to destroy them properly. We define a single controlling function that is used by all the threads. Every time we click the view, one thread is created. All the created threads use the mentioned controlling function, which draws a moving ellipse in the view's client area. A manual-reset event is used to inform all the worker threads that they must exit. Besides, we will see how to make the primary thread wait until all the worker threads have left the scene.

[Figure: all the ellipses move within the view's client area without leaving its boundaries]
You should have an SDI application open. Assume the project name is WorkerThreads.
Let's have a WM_LBUTTONDOWN message handler for launching our threads.
Declare the controlling function. A controlling function may be declared in any file; the point is that it should have global access. Assume we have a Threads.h/Threads.cpp file in which the controlling function is declared/defined as follows:
// Threads.h
#pragma once

struct THREADINFO
{
    HWND hWnd;
    POINT point;
};


UINT ThreadDraw(PVOID pParam);
// Threads.cpp
extern CEvent g_eventEnd;

UINT ThreadDraw(PVOID pParam)
{
    static int snCount = 0;
    snCount ++;
    TRACE("- ThreadDraw %d: started...\n", snCount);

    THREADINFO *pInfo = reinterpret_cast<THREADINFO *>(pParam);

    CWnd *pWnd = CWnd::FromHandle(pInfo->hWnd);

    CClientDC dc(pWnd);

    int x = pInfo->point.x;
    int y = pInfo->point.y;

    srand((UINT)time(NULL));
    CRect rectEllipse(x - 25, y - 25, x + 25, y + 25);

    CSize sizeOffset(1, 1);

    CBrush brush(RGB(rand()% 256, rand()% 256, rand()% 256));
    CBrush *pOld = dc.SelectObject(&brush);
    while (WAIT_TIMEOUT == ::WaitForSingleObject(g_eventEnd, 0))
    {
        CRect rectClient;
        pWnd->GetClientRect(rectClient);

        if (rectEllipse.left < rectClient.left ||
            rectEllipse.right > rectClient.right)
            sizeOffset.cx *= -1;

        if (rectEllipse.top < rectClient.top ||
            rectEllipse.bottom > rectClient.bottom)
            sizeOffset.cy *= -1;

        dc.FillRect(rectEllipse, CBrush::FromHandle
            ((HBRUSH)GetStockObject(WHITE_BRUSH)));

        rectEllipse.OffsetRect(sizeOffset);

        dc.Ellipse(rectEllipse);
        Sleep(25);
    }

    dc.SelectObject(pOld);

    delete pInfo;

    TRACE("- ThreadDraw %d: exiting.\n", snCount --);
    return 0;
}
This function takes a single object via its PVOID parameter: a struct whose fields are the handle of the view (so the thread can draw on its client area) and the point from which to start the circle. Note that we should pass the raw handle and not a CWnd pointer, to let each thread create a temporary C++ object over the handle and use it. Otherwise all the threads would share a single C++ object, which is a potential danger from the point of view of safe multithreaded programming. In its core, the controlling function renders a moving circle in the client area of the view. Besides, include the <afxmt.h> file in "StdAfx.h" to make CEvent visible.
Another key point here is that we prepare a THREADINFO structure to pass to the thread. This technique is mostly used when there is a need to pass more than one value to a thread (or get more than one back from it). We need to pass the window handle of the view and the initial point of the circle that is going to be created. Each thread deletes the THREADINFO object passed to it. Note that this deletion follows our convention: the primary thread allocates the THREADINFO object on the heap and the target thread deletes it. The idea is that the primary thread does not know when it is safe to delete the object, since it is owned by the secondary thread.
Declare an array variable in the CWorkerThreadsView class. We must store the pointers to the CWinThread objects in order to use them later:
private:
    CArray<CWinThread *, CWinThread *> m_ThreadArray;
Besides, include the <afxtempl.h> file in "StdAfx.h" to make CArray visible.
Change the file WorkerThreadsView.cpp. First define a global CEvent manual-reset variable somewhere at the beginning of the file:
// manual-reset event
CEvent g_eventEnd(FALSE, TRUE);
Now add code to the WM_LBUTTONDOWN message handler:
void CWorkerThreadsView::OnLButtonDown(UINT nFlags, CPoint point)
{
    THREADINFO *pInfo = new THREADINFO;
    pInfo->hWnd = GetSafeHwnd();
    pInfo->point = point;

    CWinThread *pThread = AfxBeginThread(ThreadDraw,
    (PVOID) pInfo, THREAD_PRIORITY_NORMAL, 0, CREATE_SUSPENDED);
    pThread->m_bAutoDelete = FALSE;
    pThread->ResumeThread();
    m_ThreadArray.Add(pThread);
}
Note that we turn off the auto-deletion property of the newly created thread and instead store the pointer to its CWinThread object in our array. Also note that we create the THREADINFO instance on the heap and let the thread delete it after it has finished working with the structure. To make ThreadDraw and THREADINFO visible in the WorkerThreadsView.cpp file, include the "Threads.h" file.
Take care to destroy the threads properly. As all threads are related to the view object (they are working with it), it will be reasonable to destroy them in the view's WM_DESTROY message handler:
void CWorkerThreadsView::OnDestroy()
{
    CView::OnDestroy();

    // TODO: Add your message handler code here
    g_eventEnd.SetEvent();
    for (int j = 0; j < m_ThreadArray.GetSize(); j ++)
    {
    ::WaitForSingleObject(m_ThreadArray[j]->m_hThread, INFINITE);
    delete m_ThreadArray[j];
    }
}
This function first signals the event to notify the worker threads that they must exit, and then uses WaitForSingleObject to make the primary thread wait for each worker thread until the latter is fully destroyed. To do this we need a valid CWinThread pointer even after the corresponding thread has been destroyed; that is why we removed the auto-deletion property from the CWinThread objects in the previous step. As soon as a worker thread exits, the second line of the for loop destroys the corresponding C++ object. Note that each iteration makes a call to WaitForSingleObject, which means entering the kernel mode from the user mode; for 10 iterations roughly 10,000 processor cycles are wasted. To avoid this, we might use WaitForMultipleObjects. In that case we need a C-style array of thread descriptors, so the above for loop could be replaced with the following code:
//second way (comment in 'for' loop above)
int nSize = m_ThreadArray.GetSize();
HANDLE *p = new HANDLE[nSize];

for (int j = 0; j < nSize; j ++)
{
    p[j] = m_ThreadArray[j]->m_hThread;
}

::WaitForMultipleObjects(nSize, p, TRUE, INFINITE);

for (int j = 0; j < nSize; j ++)
{
    delete m_ThreadArray[j];
}
delete [] p;
Since this code executes only once, and at the end of the application at that, such an improvement hardly buys us much here.
This is all. You can test it.
Critical Sections
Unlike other synchronization objects, critical sections work in user mode unless there is a need to enter the kernel mode. If a thread tries to execute code that is guarded by a critical section already held by another thread, it first spins, and only after a specified amount of time does it enter the kernel mode to wait for the critical section. Actually, a critical section consists of a spin counter and a semaphore; the former is used for the user-mode waiting, and the latter for the kernel-mode waiting (sleeping). In the Win32 API there is a CRITICAL_SECTION structure that represents critical section objects; in MFC, there is a class named CCriticalSection. Conceptually, a critical section is a piece of source code that must execute as a unit, that is, its execution must not be interleaved with another thread running the same guarded code. Such sections of code are needed when a single thread must be granted a monopoly on a shared resource. A simple case is the use of global variables by more than one thread. For example:
int g_nVariable = 0;

UINT Thread_First(LPVOID pParam)
{
    if (g_nVariable < 100)
    {
       ...
    }
    return 0;
}

UINT Thread_Second(LPVOID pParam)
{
    g_nVariable += 50;
    ...
    return 0;
}
This code is not safe, because no thread has monopoly access to the g_nVariable variable. Consider the following scenario. Assume the initial value of g_nVariable is 80, and control is passed to the first thread, which sees that the value of g_nVariable is less than 100 and therefore starts to execute the block under the condition. But at that moment the processor switches to the second thread, which adds 50 to the variable, so it becomes greater than 100. Afterwards, the processor switches back to the first thread, which continues executing the if block. And inside the if block the value of g_nVariable is now greater than 100, even though the block assumed it would be less than 100. To close this gap, we may use a critical section like so:
CCriticalSection g_cs;
int g_nVariable = 0;

UINT Thread_First(LPVOID pParam)
{
    g_cs.Lock();
    if (g_nVariable < 100)
    {
       ...
    }
    g_cs.Unlock();
    return 0;
}

UINT Thread_Second(LPVOID pParam)
{
    g_cs.Lock();
    g_nVariable += 50;
    g_cs.Unlock();
    ...
    return 0;
}
Here, two methods of the CCriticalSection class are used. A call to the Lock function tells the system that the code that follows must not be executed concurrently with other threads holding the same critical section, until the same thread calls the Unlock function. In response to this call, the system first checks whether that code has been captured by another thread via the same critical section object. If it has, the thread waits until the capturing thread releases the critical section and then captures it itself.
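One practical note, offered as a sketch rather than as part of the article: CSingleLock also works with a CCriticalSection, and using it as a scoped lock makes it harder to forget the matching Unlock on an early return (the function name is illustrative; g_cs and g_nVariable are from the example above):

void UseSharedVariable()
{
    CSingleLock lock(&g_cs, TRUE);   // TRUE = acquire the critical section now

    if (g_nVariable < 100)
    {
        // ... work with the protected data ...
    }
}   // 'lock' goes out of scope here and releases g_cs automatically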
If there is more than one shared resource to protect, it is good practice to use a separate critical section per resource. Do not forget to match an Unlock to each Lock. When using critical sections, one should also be careful not to create mutual blocking situations for the collaborating threads: a thread could wait for a critical section to be freed by another thread, which in turn waits for a critical section captured by the first thread. Obviously, in such a case the two threads will wait forever.
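A small sketch of that mutual-blocking trap and the usual cure, not taken from the article (the names are illustrative): if every thread acquires the critical sections in the same fixed order, the circular wait described above cannot form.

CCriticalSection g_csA;
CCriticalSection g_csB;

UINT ThreadOne(LPVOID pParam)
{
    g_csA.Lock();       // both threads take A first ...
    g_csB.Lock();       // ... and B second
    // ... use both shared resources ...
    g_csB.Unlock();
    g_csA.Unlock();
    return 0;
}

UINT ThreadTwo(LPVOID pParam)
{
    g_csA.Lock();       // same order as ThreadOne; had this thread taken
    g_csB.Lock();       // B first, the two threads could deadlock
    // ...
    g_csB.Unlock();
    g_csA.Unlock();
    return 0;
}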
It is common practice to embed a critical section in a C++ class and thus make its objects thread-safe. This kind of trick is needed when objects of a specific class are supposed to be used by more than one thread simultaneously. The big picture looks like this:
class CSomeClass
{
    CCriticalSection m_cs;
    int m_nData1;
    int m_nData2;

public:
    void SetData(int nData1, int nData2)
    {
        m_cs.Lock();
        m_nData1 = Function(nData1);
        m_nData2 = Function(nData2);
        m_cs.Unlock();
    }

    int GetResult()
    {
        m_cs.Lock();
        int nResult = Function(m_nData1, m_nData2);
        m_cs.Unlock();
        return nResult;
    }
};
It is possible that two or more threads call the SetData and/or GetResult methods on the same CSomeClass object at the same time. By wrapping the bodies of those methods with the critical section, we prevent the object's data from being corrupted during such concurrent calls.
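A small usage sketch, assuming the CSomeClass definition above (the thread functions are illustrative): both threads can safely touch the same object because the embedded critical section serializes access to it.

CSomeClass g_shared;

UINT WriterProc(LPVOID pParam)
{
    g_shared.SetData(10, 20);        // serialized by m_cs inside CSomeClass
    return 0;
}

UINT ReaderProc(LPVOID pParam)
{
    int nResult = g_shared.GetResult();
    // ... use nResult ...
    return 0;
}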
Mutexes
Mutexes, like critical sections, are designed to protect shared resources from simultaneous access. Mutexes are implemented inside the kernel, so using them means entering the kernel mode. A mutex can synchronize not only threads of one process but also threads of different processes; such a mutex must have a unique name so that another process can find it (these are called named mutexes). MFC provides the CMutex class for working with mutexes. A mutex might be used this way (here m_Mutex is a CMutex member):
CSingleLock singleLock(&m_Mutex);
singleLock.Lock();  // try to capture the shared resource
if (singleLock.IsLocked())  // we did it
{
    // use the shared resource ...

    // After we are done, let other threads use the resource
    singleLock.Unlock();
}
Or the same using the Win32 API functions:
// try to capture the shared resource
::WaitForSingleObject(m_Mutex, INFINITE);

// use the shared resource ...

// After we are done, let other threads use the resource
::ReleaseMutex(m_Mutex);
A mutex can also be used to limit an application to a single running instance. The following code might be placed at the beginning of the InitInstance method (or WinMain):
HANDLE h = CreateMutex(NULL, FALSE, "MutexUniqueName");
if (GetLastError() == ERROR_ALREADY_EXISTS)
{
    AfxMessageBox("An instance is already running.");
    return(0);
}
To guarantee a globally unique name, use a GUID instead.
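As a sketch of that advice (the GUID string below is just a placeholder generated for illustration; use any freshly generated GUID of your own):

HANDLE h = CreateMutex(NULL, FALSE,
                       "MyApp-{7C2B1B9E-4A3D-4F51-9C0A-2D6E8A1B5F37}");
if (GetLastError() == ERROR_ALREADY_EXISTS)
{
    AfxMessageBox("An instance is already running.");
    return 0;
}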
Semaphores
In order to limit the number of threads that can use a shared resource simultaneously, we use semaphores. A semaphore is a kernel object. It stores a counter that keeps track of how many more threads may still enter the shared resource. For example, the following code uses the MFC CSemaphore class to create a semaphore that guarantees that at most 5 threads can use the shared resource at any given time (the maximum is given by the second parameter of the constructor); the first parameter, the initial count, is also 5, which means that initially no thread has captured the resource:
CSemaphore g_Sem(5, 5);
As soon as a thread gets access to the shared resource, the counter variable of the semaphore is decremented by one. If it becomes equal to zero, then any further attempt to use the resource will be rejected until at least one thread that has captured the resource leaves it (in other words, releases the semaphore). We may turn to CSingleLock and/or CMultiLock classes to wait/capture/release a semaphore. We could also use the API functions as shown below:
// Try to use the shared resource
::WaitForSingleObject(g_Sem, INFINITE);
// The semaphore's counter has now been decremented by one

//... Use the shared resource ...

// After we are done, let other threads use the resource
::ReleaseSemaphore(g_Sem, 1, NULL);
// The semaphore's counter has now been incremented by one
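For comparison, here is a hedged sketch of the same capture/release pattern through the MFC wrapper classes mentioned above (CSingleLock over the CSemaphore object):

CSingleLock lock(&g_Sem);
lock.Lock();                 // blocks while the semaphore's counter is zero
if (lock.IsLocked())
{
    // ... use the shared resource ...
    lock.Unlock();           // gives the slot back (increments the counter)
}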
Communication between Secondary Threads and the Primary Thread
If the primary thread wants to inform a secondary thread about some action, it is convenient to use an event object. Doing the opposite in the same way, however, is inefficient and inconvenient for the user, since blocking the primary thread while it waits for an event may (and usually does) make the application unresponsive. In this case it is better to use user-defined messages to interact with the primary thread. Such a message is addressed to a specific window, which means that the handle of that window must be visible to the callers (the secondary threads).
To create a user-defined message, we first define an identifier for that message (more precisely, the identifier is the message). This identifier should be visible to both the primary thread and the secondary threads:
#define WM_MYMSG (WM_USER + 1)
WM_USER+n messages are only guaranteed to be unique within a window class, not across the whole application. A safer way, in terms of uniqueness, is to use WM_APP+n messages like so:
#define WM_MYMSG (WM_APP + 1)
Next, a handler method should be declared for the message inside the window class declaration to which (window) the message is going to be addressed:
afx_msg LRESULT OnMyMessage(WPARAM wParam, LPARAM lParam);
Of course, there should be some definition of the method:
LRESULT CMyWnd::OnMyMessage(WPARAM wParam, LPARAM lParam)
{
    // A notification has arrived
    // Do something ...
    return 0;
}
And finally, to bind the handler to the message identifier, the ON_MESSAGE macro should be placed between the BEGIN_MESSAGE_MAP and END_MESSAGE_MAP macros:
BEGIN_MESSAGE_MAP(CMyWnd, CWnd)
    ...

    ON_MESSAGE(WM_MYMSG, OnMyMessage)
END_MESSAGE_MAP()
Now a secondary thread that has the handle of a window living in the primary thread can notify it with the user-defined message as follows:
UINT ThreadProc(LPVOID pParam)
{
    HWND hWnd = (HWND) pParam;

    ...

    // notify the primary thread's window
    ::PostMessage(hWnd, WM_MYMSG, 0, 0);

    return 0;
}
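Often the notification has to carry data with it. A hedged sketch of the usual pattern follows; the MYRESULT structure and its fields are illustrative and simply mirror the THREADINFO convention used earlier: the sender allocates the object on the heap and the receiving window deletes it.

struct MYRESULT
{
    int     nCode;
    CString strText;
};

UINT ThreadProcWithData(LPVOID pParam)
{
    HWND hWnd = (HWND) pParam;

    MYRESULT *pResult = new MYRESULT;
    pResult->nCode = 42;
    pResult->strText = _T("done");

    // The pointer travels in LPARAM; PostMessage returns immediately
    ::PostMessage(hWnd, WM_MYMSG, 0, (LPARAM) pResult);
    return 0;
}

// An alternative version of the handler shown above, now consuming the data
LRESULT CMyWnd::OnMyMessage(WPARAM wParam, LPARAM lParam)
{
    MYRESULT *pResult = reinterpret_cast<MYRESULT *>(lParam);
    // ... use pResult->nCode and pResult->strText ...
    delete pResult;     // the receiver owns the object, per the convention
    return 0;
}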
=========================================================

Windows programming interview questions
By admin | August 2, 2005

   1. What are kernel objects? - There are several types of kernel objects, such as access token objects, event objects, file objects, file-mapping objects, I/O completion port objects, job objects, mailslot objects, mutex objects, pipe objects, process objects, semaphore objects, thread objects, and waitable timer objects.
   2. What is a kernel object? - Each kernel object is simply a memory block allocated by the kernel and is accessible only by the kernel. This memory block is a data structure whose members maintain information about the object. Some members (security descriptor, usage count, and so on) are the same across all object types, but most are specific to a particular object type. For example, a process object has a process ID, a base priority, and an exit code, whereas a file object has a byte offset, a sharing mode, and an open mode.

   3. Can a user access these kernel object structures? - No; kernel object data structures are accessible only by the kernel.
   4. If we cannot alter these Kernel Object structures directly, how do our applications manipulate these kernel objects? - The answer is that Windows offers a set of functions that manipulate these structures in well-defined ways. These kernel objects are always accessible via these functions. When you call a function that creates a kernel object, the function returns a handle that identifies the object.
   5. Who owns the kernel object? - Kernel objects are owned by the kernel, not by a process.
   6. How does the kernel object outlive the process that created it? - If your process calls a function that creates a kernel object and then your process terminates, the kernel object is not necessarily destroyed. Under most circumstances, the object will be destroyed; but if another process is using the kernel object your process created, the kernel knows not to destroy the object until the other process has stopped using it
   7. Which data member is common to all kernel objects, and what is it used for? -

       The usage count is one of the data members common to all kernel object types
   8. How can you tell the difference between a kernel object and a user object? -

       The easiest way to determine whether an object is a kernel object is to examine the function that creates the object. Almost all functions that create kernel objects have a parameter that allows you to specify security attribute information.
   9. What is the purpose of Process Handle Table? -

      When a process is initialized, the system allocates a handle table for it. This handle table is used only for kernel objects, not for User objects or GDI objects. When a process first initializes, its handle table is empty. Then when a thread in the process calls a function that creates a kernel object, such as CreateFileMapping , the kernel allocates a block of memory for the object and initializes it; the kernel then scans the process’s handle table for an empty entry
  10. Name a few functions that create kernel objects. - HANDLE CreateThread(...), HANDLE CreateFile(...), HANDLE CreateFileMapping(...), HANDLE CreateSemaphore(...), etc. All functions that create kernel objects return process-relative handles that can be used successfully by any and all threads running in the same process.
  11. What is a handle? - A handle value is actually an index into the process's handle table that identifies where the kernel object's information is stored.
  12. How the handle helps in manipulating the kernel objects? - Whenever you call a function that accepts a kernel object handle as an argument, you pass the value returned by one of the Create* functions. Internally, the function looks in your process’s handle table to get the address of the kernel object you want to manipulate and then manipulates the object’s data structure in a well-defined fashion.
  13. What happens when the CloseHandle(handle) is called? - This function first checks the calling process’s handle table to ensure that the index (handle) passed to it identifies an object that the process does in fact have access to. If the index is valid, the system gets the address of the kernel object’s data structure and decrements the usage count member in the structure; if the count is zero, the kernel destroys the kernel object from memory.
  14. You forget to call CloseHandle - will there be a memory leak? - Well, yes and no. It is possible for a process to leak resources (such as kernel objects) while the process runs. However, when the process terminates, the operating system ensures that any and all resources used by the process are freed—this is guaranteed. For kernel objects, the system performs the following actions: When your process terminates, the system automatically scans the process’s handle table. If the table has any valid entries (objects that you didn’t close before terminating), the system closes these object handles for you. If the usage count of any of these objects goes to zero, the kernel destroys the object.
  15. What is the need of process relative handles? - The most important reason was robustness. If kernel object handles were system-wide values, one process could easily obtain the handle to an object that another process was using and wreak havoc on that process. Another reason for process-relative handles is security. Kernel objects are protected with security, and a process must request permission to manipulate an object before attempting to manipulate it. The creator of the object can prevent an unauthorized user from touching the object simply by denying access to it
  16. How the handles are handled in the child process? - The operating system creates the new child process but does not allow the child process to begin executing its code right away. Of course, the system creates a new, empty process handle table for the child process—just as it would for any new process. But because you passed TRUE to CreateProcess’s bInheritHandles parameter, the system does one more thing: it walks the parent process’s handle table, and for each entry it finds that contains a valid inheritable handle, the system copies the entry exactly into the child process’s handle table. The entry is copied to the exact same position in the child process’s handle table as in the parent’s handle table.
  17. Why are the entries in the parent and child process handle tables the same? - Because the handle value that identifies a kernel object is then identical in both the parent and the child processes.
  18. What about the usage count in the parent child process tables? - The system increments the usage count of the kernel object because two processes are now using the object. For the kernel object to be destroyed, both the parent process and the child process must either call CloseHandle on the object or terminate.
  19. What are named objects? - One method for sharing kernel objects across process boundaries is to name the objects. The named kernel objects are: 1) mutexes, 2) events, 3) semaphores, 4) waitable timers, 5) file mappings, 6) job objects. There are APIs to create these objects with the object name as the last parameter.
  20. What do you mean by an unnamed object? - When you create a kernel object with an API such as CreateMutex(..., pszName) and pass NULL for the pszName parameter, you indicate to the system that you want to create an unnamed (anonymous) kernel object. When you create an unnamed object, you can still share it across processes by using either inheritance or DuplicateHandle.
  21. What is DuplicateHandle (API)? - Takes an entry in one process’s handle table and makes a copy of the entry into another process’s handle table
  22. What is a thread? - A thread describes a path of execution within a process. Every time a process is initialized, the system creates a primary thread. This thread begins executing with the C/C++ run-time library's startup code, which in turn calls your entry-point function (main, wmain, WinMain, or wWinMain) and continues executing until the entry-point function returns and the C/C++ run-time library's startup code calls ExitProcess.
  23. What is the limit on per process for creating a thread? - The number of threads a process can create is limited by the available virtual memory and depends on the default stack size
  24. What are synchronization objects? - Synchronization objects are used to coordinate the execution of multiple threads.
  25. Which kernel objects are use for Thread Synchronization on different processes? - Event, Mutex, Semaphore
  26. What is an event object and why is it used? - An event is a thread-synchronization object that can be put into the signaled or nonsignaled state.
  27. What are the signaled and nonsignaled states? - An event in the signaled state is able to release the threads that are waiting for it to be signaled. An event in the nonsignaled state will not release any thread waiting on it. Example from our project: when the user double-clicked the image application icon, two image application windows were created; so PAIG created an event and set it to the nonsignaled state, and the image application later set the event to the signaled state, after which all the waiting threads were released.
  28. APIs for creating and controlling events - CreateEvent: create the event; OpenEvent: open an already created event; SetEvent: put the event into the signaled state; ResetEvent: put the event into the nonsignaled state.
  29. What is Mutex Object and why it is used? - A mutex object is a synchronization object whose state is set to signaled when it is not owned by any thread, and non-signaled when it is owned. For example, to prevent two threads from writing to shared memory at the same time, each thread waits for ownership of a mutex object before executing the code that accesses the memory. After writing to the shared memory, the thread releases the mutex object.
  30. How do I create a Mutex? - A thread uses the CreateMutex function to create a mutex object. The creating thread can request immediate ownership of the mutex object and can also specify a name for the mutex object
  31. How do other threads own the mutex? - Threads in other processes can open a handle to an existing named mutex object by specifying its name in a call to the OpenMutex function. Any thread with a handle to a mutex object can use one of the wait functions to request ownership of the mutex. If the mutex is owned by another thread, the wait function blocks the requesting thread until the owning thread releases the mutex using the ReleaseMutex function.
  32. What is a semaphore and why is it used? - A semaphore object is a synchronization object that maintains a count between zero and a specified maximum value. The count is decremented each time a thread completes a wait for the semaphore object and incremented each time a thread releases the semaphore. When the count reaches zero, no more threads can successfully wait for the semaphore object state to become signaled. The state of a semaphore is set to signaled when its count is greater than zero, and nonsignaled when its count is zero. The semaphore object is useful for controlling a shared resource that can support only a limited number of users. It acts as a gate that limits the number of threads sharing the resource to a specified maximum. For example, an application might place a limit on the number of windows that it creates. It uses a semaphore with a maximum count equal to the window limit, decrementing the count whenever a window is created and incrementing it whenever a window is closed. The application specifies the semaphore object in a call to one of the wait functions before each window is created. When the count is zero - indicating that the window limit has been reached - the wait function blocks execution of the window-creation code.
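The window-limit example in the last answer, rendered as a short Win32 sketch (the limit of 10 and the helper names are my own illustration, not from the text):

// Up to 10 application windows at a time: initial count 10, maximum 10
HANDLE g_hWndSem = ::CreateSemaphore(NULL, 10, 10, NULL);

BOOL TryCreateAppWindow()
{
    // With a zero timeout we simply refuse instead of blocking when the
    // counter is already zero (i.e., the window limit has been reached).
    if (::WaitForSingleObject(g_hWndSem, 0) != WAIT_OBJECT_0)
        return FALSE;

    // ... create the window here ...
    return TRUE;
}

void OnAppWindowClosed()
{
    ::ReleaseSemaphore(g_hWndSem, 1, NULL);   // give the slot back
}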






Tell the differences between Windows 95 and Windows NT?
- Lack of Unicode implementation for most of the functions of Win95.
- Different extended error codes.
- Different numbers of window and menu handles.
- Windows 95 implements some window management features in 16 bits.
- Windows 95 uses a 16-bit world coordinate system and coordinates are restricted to 32K.
- Deletion of drawing objects is different.
- Windows 95 does not implement the print monitor DLLs of Windows NT.
- Differences in the registry.
- Windows 95 does not support multiprocessor computers.
- The NT implementation of the scheduler is quite different.
- Different driver models.
- Win95 was built with backward compatibility in mind, and an ill-behaving 16-bit process may easily corrupt the system.
- Win95 starts from real DOS, while WinNT uses DOS emulation when one needs a DOS.
- Win95's FAT is built over the 16-bit Win3.1 FAT (not FAT32! - actually, Win95's FAT contains two FATs).
What is the effective way of DIB files management? A: A memory-mapped file is the best choice for device-independent bitmaps. An MMF allows you to map the file to RAM/swap addresses and let Windows handle all the load/unload operations for the file.

What should you be aware of if you design a program that runs for days/weeks/months/years? A: When your program has to run for a long time, you should be careful about heap allocations, because if you use new/delete intensively the memory becomes highly fragmented over time. It is better to allocate all the necessary memory once than to allocate many small blocks repeatedly. You should be especially careful about the CString class, which allocates heap memory frequently.

What are the advantages of using DLLs? A: DLLs are run-time modular; a DLL is loaded when the program needs it. They are also used for sharing code between executables.

What are the different types of DLLs? A: Extension, regular, and pure Win32 DLLs (without MFC).

What are the differences between a User DLL and an MFC Extension DLL? A: An extension DLL supports a C++ interface, i.e., it can export whole C++ classes and the client may construct objects from them. An extension DLL dynamically links to the MFC DLLs (those whose names start with MFC??.DLL) and must be synchronous with the MFC version it was developed for. An extension DLL is usually small (a simple extension DLL might be around 10K). A regular DLL can be loaded by any Win32 environment (e.g. VB 5); the big restriction is that a regular DLL may export only C-style functions, and regular DLLs are generally larger. When you build a regular DLL, you may choose static linking (in which case the MFC library code is copied into your DLL) or dynamic linking (in which case the MFC DLLs must be present on the target machine).

What do you have to do when you inherit from two CObject-based classes? A: First of all, this is a bad idea, no matter what the interviewer tells you. Secondly, if you are forced to use the condemned diamond structure, read Technical Note 16 in MSDN, which discusses why MFC does not support multiple inheritance and what to do if you still need it (there are a few problems with the CObject class, such as incorrect information returned by IsKindOf() of CObject under MI, etc.).

What are the additional requirements for inheritance from CWnd-based classes? A: Again, this is a bad idea; try to find an alternative solution. Anyway, if you have to multiply inherit from a CWnd-based class, the following requirements apply in addition to the above conditions (again, this is an extremely bad interview question!): there must be only one CWnd-derived base class, and the CWnd-derived base class must be the first (or left-most) base class.

What is a "mutex"? A: Mutexes are a mechanism of process synchronization that might be used to synchronize data across multiple processes. A mutex is a waitable object, while a critical section is not. Mutexes are significantly slower than critical sections.

What's the difference between a "mutex" and a "critical section"? A: A critical section provides synchronization within one process only, while mutexes allow data synchronization across processes.

What might be wrong with the following pseudo-code:
FUNCTION F
BEGIN
INT I=2
DO
I = I + 1
IF I = 4 THEN BREAK
END DO
END
A: This code is not thread-safe. Suppose one thread increments I to 3 and then returns to the beginning of the DO statement. It then increments I to 4, and at that moment a context switch happens; a second thread increments I to 5. From this moment the code shown will execute forever until some external force intervenes. The solution is obviously to use some synchronization object to protect I from being changed by more than one thread at a time.

What is a deadlock? A: A deadlock, very simply, is a condition in which two or more threads wait for each other to release a shared resource before resuming their execution. Because all threads participating in a deadlock are suspended and cannot, therefore, release the resources they own, no thread can continue, and the entire application (or, worse, more than one application if the resources are shared between threads in multiple applications) appears to hang.

How can we create a thread in the MFC framework? A: Using AfxBeginThread.

What types of threads are supported by the MFC framework? A: Worker threads and windows (user-interface) threads. A worker thread usually does not have a user interface and is easier to use. A windows thread has a user interface and is usually used to improve the responsiveness to user input.

Message maps: when is ON_UPDATE_COMMAND_UI called? (the message may vary) A: When a user of your application pulls down a menu, each menu item needs to know whether it should be displayed as enabled or disabled. The target of a menu command provides this information by implementing an ON_UPDATE_COMMAND_UI handler.

What is a "hook"? A: A point in the Windows message-handling mechanism where an application can install a subroutine to monitor messages. You need hooks to implement your own Windows message filter.

What are the differences between the MFC exception macros and the C++ exception keywords? A: The MFC macros accept only exceptions of the CException class, or of classes derived from CException, whereas the C++ exception mechanism accepts exceptions of any type.

Reusable control class: how would you set the background of an edit control to a customized color? A: You have several choices, but the simplest one is subclassing. Kruglinski, in his "Inside Visual C++", describes this process pretty well. Generally, you derive a class from the control class, override the messages you want (like WM_CTLCOLOR), and then in an initialization function (such as CDialog::OnInitDialog) subclass the control with SubclassDlgItem().

What is message reflection? How could you accomplish the above task using message reflection? A: See Technical Note 62 of MSDN. Usually a message is handled in the parent class, which means you have to override the message handler for each parent. Sometimes it is nicer to handle a message in the control itself, without involving the parent. Such a handling mechanism is called message reflection: the control "reflects" the message to itself and then processes it. The reflected message-map macros (ON_..._REFLECT, for example ON_WM_CTLCOLOR_REFLECT) are used to handle a reflected message.

What is the command routing in the MFC framework? A: CView => CDocument => CFrameWnd => CWinApp.

What's the purpose of the CView class? The CDocument class? What are the relationships between them? A: The CView class provides the basic functionality for user-defined view classes. A view is attached to a document and acts as an intermediary between the document and the user: the view renders an image of the document on the screen or printer and interprets user input as operations upon the document. The CDocument class provides the basic functionality for user-defined document classes.
A document represents the unit of data that the user typically opens with the File Open command and saves with the File Save command. Users interact with a document through the CView object(s) associated with it. A view is a child of a frame window. The relationship between a view class, a frame window class, and a document class is established by a CDocTemplate object. A view can be attached to only one document, but a document can have multiple views attached to it at once.

What class is responsible for the document template in an MDI application? A: CMultiDocTemplate.

What function must be used to add a document template? A: AddDocTemplate.

What main objects are created for SDI and MDI applications? A: CWinApp - the application object. For an MDI application with New document support: CDocTemplate, CDocument, CView, CMainFrame. If your application is SDI, your CMainFrame class is derived from CFrameWnd. If your application is MDI, CMainFrame is derived from CMDIFrameWnd. For an MDI application a CMDIChildWnd is also created.

We have a loop of 800,000 iterations. It fails on iteration 756,322. How can we get information before it fails? A: You could think of several ways to debug this: set a condition in the debugger to stop when the loop has been passed around 756,321 times; throw an exception within the loop (maybe not the best idea, since an exception does not show you the exact location of the failure); or create a log file and write detailed information within the loop.

Our Debug version works fine, but Release fails. What should be done? A: There are four differences between debug and release builds:
heap layout (you may have heap overwrite in release mode - this will cause 90% of all problems),
compilation (check conditional compilation statements, assertion functions etc.),
pointer support (no padding in release mode, which increases the chance that a stray pointer points into invalid memory),
optimization.

