Saturday, November 17, 2007
Grid computing is a term in distributed computing which can have several meanings:
1. A local computer cluster which is like a "grid" because it is composed of multiple nodes.
2. Offering online computation or storage as a metered commercial service, known as utility computing, computing on demand, or cloud computing.
3. The creation of a "virtual supercomputer" by using spare computing resources within an organization.
4. The creation of a "virtual supercomputer" by using a network of geographically dispersed computers. Volunteer computing, which generally focuses on scientific, mathematical, and academic problems, is the most common application of this technology.
These varying definitions cover the spectrum of "distributed computing", and sometimes the two terms are used as synonyms. This article focuses on distributed computing technologies which are not in the traditional dedicated clusters; otherwise, see computer cluster.
Functionally, one can also speak of several types of grids:
Computational grids (including CPU-scavenging grids), which focus primarily on computationally intensive operations.
Data grids, for the controlled sharing and management of large amounts of distributed data.
Equipment grids, which have a primary piece of equipment, e.g. a telescope, and where the surrounding grid is used to control the equipment remotely and to analyze the data it produces.
BENEFITS:-
1. Flexibility to meet changing business needs
2. High quality of service at low cost
3. Faster computing for better information
4. Investment protection and rapid ROI
5. A shared infrastructure environment
Grids versus conventional supercomputers:-
"Distributed" or "grid" computing in general is a special type of parallel computing which relies on complete computers (with onboard CPU, storage, power supply, network interface, etc.) connected to a network (private, public or the Internet) by a conventional network interface, such as Ethernet. This is in contrast to the traditional notion of a supercomputer, which has many processors connected by a local high-speed computer bus.
The primary advantage of distributed computing is that each node can be purchased as commodity hardware, which when combined can produce similar computing resources to a multiprocessor supercomputer, but at lower cost. This is due to the economies of scale of producing commodity hardware, compared to the lower efficiency of designing and constructing a small number of custom supercomputers. The primary performance disadvantage is that the various processors and local storage areas do not have high-speed connections. This arrangement is thus well-suited to applications in which multiple parallel computations can take place independently, without the need to communicate intermediate results between processors.
The high-end scalability of geographically dispersed grids is generally favorable, due to the low need for connectivity between nodes relative to the capacity of the public Internet. Conventional supercomputers also create physical challenges in supplying sufficient electricity and cooling capacity in a single location. Both supercomputers and grids can be used to run multiple parallel computations at the same time, which might be different simulations for the same project, or computations for completely different applications. The infrastructure and programming considerations needed to do this on each type of platform are different, however.
There are also differences in programming and deployment. It can be costly and difficult to write programs so that they can be run in the environment of a supercomputer, which may have a custom operating system, or require the program to address concurrency issues. If a problem can be adequately parallelized, a "thin" layer of "grid" infrastructure can allow conventional, standalone programs to run on multiple machines (but each given a different part of the same problem). This makes it possible to write and debug programs on a single conventional machine, and eliminates complications due to multiple instances of the same program running in the same shared memory and storage space at the same time.
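To make that "thin grid layer" idea concrete, here is a minimal sketch in Python of a coordinator splitting one job into independent work units and running the same standalone routine on each slice. Worker processes stand in for grid nodes here, and the names (count_primes, split) are invented for illustration, not taken from any real grid toolkit.
*******************************************************
# Minimal sketch of a "thin" grid layer: split one big job into
# independent work units and run the same standalone program on each
# slice. No unit needs another unit's intermediate results.

from concurrent.futures import ProcessPoolExecutor

def count_primes(lo, hi):
    """The unchanged, standalone program: count primes in [lo, hi)."""
    def is_prime(n):
        if n < 2:
            return False
        return all(n % d for d in range(2, int(n ** 0.5) + 1))
    return sum(1 for n in range(lo, hi) if is_prime(n))

def split(lo, hi, units):
    """Cut the full range into independent work units."""
    step = (hi - lo) // units
    bounds = [lo + i * step for i in range(units)] + [hi]
    return list(zip(bounds[:-1], bounds[1:]))

if __name__ == "__main__":
    units = split(2, 200_000, 8)      # eight independent work units
    los, his = zip(*units)
    # Worker processes stand in for grid nodes.
    with ProcessPoolExecutor() as pool:
        partials = pool.map(count_primes, los, his)
    print("primes below 200000:", sum(partials))
*******************************************************
Because each unit is self-contained, the same count_primes routine can be written and debugged on a single conventional machine before being handed out to many.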
Design considerations and variations:-
One feature of distributed grids is that they can be formed from computing resources belonging to multiple individuals or organizations (known as multiple administrative domains). This can facilitate commercial transactions, as in utility computing, or make it easier to assemble volunteer computing networks.
One disadvantage of this feature is that the computers which are actually performing the calculations might not be entirely trustworthy. The designers of the system must thus introduce measures to prevent malfunctions or malicious participants from producing false, misleading, or erroneous results, and from using the system as an attack vector. This often involves assigning work randomly to different nodes (presumably with different owners) and checking that at least two different nodes report the same answer for a given work unit. Discrepancies would identify malfunctioning and malicious nodes.
Due to the lack of central control over the hardware, there is no way to guarantee that nodes will not drop out of the network at random times. Some nodes (like laptops or dialup Internet customers) may also be available for computation but not network communications for unpredictable periods. These variations can be accommodated by assigning large work units (thus reducing the need for continuous network connectivity) and reassigning work units when a given node fails to report its results as expected.
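As a toy illustration of both safeguards, redundant assignment and reassignment on failure, here is a hedged Python sketch. The simulated node behavior and the two-matching-reports rule are deliberate simplifications of what real volunteer-computing schedulers do.
*******************************************************
# Toy scheduler illustrating two safeguards from the text: redundant
# assignment (two reports must agree before a result is trusted) and
# reassignment when a node fails to report. All behavior is simulated.

import random

def run_on_node(unit):
    """Simulated node: may return a wrong answer, or nothing at all."""
    if random.random() < 0.1:
        return None                 # node dropped out / missed its deadline
    if random.random() < 0.1:
        return unit * unit + 1      # malfunctioning or malicious result
    return unit * unit              # honest result

def trusted_result(unit):
    """Keep reassigning the unit until two reports agree."""
    reports = []
    while True:
        answer = run_on_node(unit)
        if answer is None:
            continue                # no report: reassign the work unit
        if answer in reports:
            return answer           # two independent nodes agree
        reports.append(answer)
        # Note: two colluding wrong answers would also be accepted,
        # which is exactly why real systems spread replicas across
        # nodes with (presumably) different owners.

results = {u: trusted_result(u) for u in range(10)}
print(results)
*******************************************************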
The impacts of trust and availability on performance and development difficulty can influence the choice of whether to deploy onto a dedicated computer cluster, to idle machines internal to the developing organization, or to an open external network of volunteers or contractors.
Although hubs and switches both glue the PCs in a network together, a switch is more expensive and a network built with switches is generally considered faster than one built with hubs. Why?
When a hub receives a packet (chunk) of data (a frame in Ethernet lingo) at one of its ports from a PC on the network, it transmits (repeats) the packet to all of its ports and, thus, to all of the other PCs on the network. If two or more PCs on the network try to send packets at the same time, a collision is said to occur. When that happens, all of the PCs have to go through a routine to resolve the conflict. The process is prescribed in the Ethernet Carrier Sense Multiple Access with Collision Detection (CSMA/CD) protocol. Each Ethernet adapter has both a receiver and a transmitter. If the adapters didn't have to listen with their receivers for collisions, they would be able to send data at the same time they are receiving it (full duplex). Because they have to operate at half duplex (data flows one way at a time) and a hub retransmits data from one PC to all of the PCs, the maximum bandwidth is 100 Mbps, and that bandwidth is shared by all of the PCs connected to the hub. The result is that when a person using a computer on a hub downloads a large file or group of files from another computer, the network becomes congested. In a 10 Mbps 10BASE-T network the effect is to slow the network to nearly a crawl. The effect on a small, 100 Mbps (million bits per second), 5-port network is not as significant.
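For a feel of why collisions hurt, here is a deliberately crude, slotted Python model of a hub-style shared segment. The probabilistic retry below merely approximates Ethernet's binary exponential backoff; it is not the real CSMA/CD algorithm.
*******************************************************
# Crude slotted model of a shared (hub-style) Ethernet segment: every
# station hears every frame, so two senders in the same slot collide
# and both must retry after backing off.

import random

def simulate_hub(stations=5, slots=10_000):
    backoff = [1] * stations          # current contention window per station
    delivered = collisions = 0
    for _ in range(slots):
        # Each station transmits this slot with probability 1/window.
        senders = [s for s in range(stations)
                   if random.random() < 1 / backoff[s]]
        if len(senders) == 1:
            delivered += 1
            backoff[senders[0]] = 1   # success resets the window
        elif len(senders) > 1:
            collisions += 1
            for s in senders:
                backoff[s] = min(backoff[s] * 2, 1024)  # back off harder
    return delivered, collisions

done, lost = simulate_hub()
print(f"slots carrying a frame: {done}, slots wasted on collisions: {lost}")
*******************************************************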
Two computers can be connected directly together in an Ethernet with a crossover cable. A crossover cable doesn't have a collision problem. It hardwires the Ethernet transmitter on one computer to the receiver on the other. Most 100BASE-TX Ethernet adapters can detect when listening for collisions is not required, through a process known as auto-negotiation, and will operate in full-duplex mode when it is permitted. The result is that a crossover cable doesn't have delays caused by collisions, data can be sent in both directions simultaneously, the maximum available bandwidth is 200 Mbps (100 Mbps each way), and there are no other PCs with which the bandwidth must be shared.
An Ethernet switch automatically divides the network into multiple segments, acts as a high-speed, selective bridge between the segments, and supports simultaneous connections of multiple pairs of computers which don't compete with other pairs of computers for network bandwidth. It accomplishes this by maintaining a table of each destination address and its port. When the switch receives a packet, it reads the destination address from the header information in the packet, establishes a temporary connection between the source and destination ports, sends the packet on its way, and then terminates the connection.
Picture a switch as making multiple temporary crossover cable connections between pairs of computers (the cables are actually straight-through cables; the crossover function is done inside the switch). High-speed electronics in the switch automatically connect the end of one cable (source port) from a sending computer to the end of another cable (destination port) going to the receiving computer, on a per-packet basis. Multiple connections like this can occur simultaneously. It's as simple as that. And like a crossover cable between two PCs, PCs on an Ethernet switch do not share the transmission media, do not experience collisions or have to listen for them, can operate in full-duplex mode, have bandwidth as high as 200 Mbps (100 Mbps each way), and do not share this bandwidth with other PCs on the switch. In short, a switch is "more better."
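The per-packet logic described above can be sketched in a few lines of Python. One detail worth noting: a real switch learns its table from the source address of each arriving frame and floods frames for destinations it hasn't seen yet. The frame format and class names here are invented for illustration.
*******************************************************
# Minimal sketch of the per-packet logic inside a learning Ethernet
# switch: remember which port each source address arrived on, then use
# that table to send each frame only to its destination's port. Frames
# are just (source, destination, payload) tuples for illustration.

class Switch:
    def __init__(self, num_ports):
        self.num_ports = num_ports
        self.mac_table = {}                      # address -> port

    def receive(self, in_port, frame):
        src, dst, payload = frame
        self.mac_table[src] = in_port            # learn the sender's port
        if dst in self.mac_table:
            return [self.mac_table[dst]]         # temporary src->dst path
        # Unknown destination: flood to every port except the ingress one.
        return [p for p in range(self.num_ports) if p != in_port]

sw = Switch(num_ports=5)
print(sw.receive(0, ("pc-a", "pc-b", "hello")))  # flooded: pc-b unknown
print(sw.receive(3, ("pc-b", "pc-a", "hi")))     # [0]: pc-a was learned
*******************************************************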
Thursday, November 1, 2007
Microsoft Windows Vista
Feel the Magic of Windows Vista:-
Windows Vista:-
Windows Vista contains hundreds of new and reworked features; some of the most significant include an updated graphical user interface and visual style dubbed Windows Aero, improved searching features, new multimedia creation tools such as Windows DVD Maker, and completely redesigned networking, audio, print, and display sub-systems. Vista also aims to increase the level of communication between machines on a home network using peer-to-peer technology, making it easier to share files and digital media between computers and devices. For developers, Vista includes version 3.0 of the .NET Framework, which aims to make it significantly easier for developers to write applications than with the traditional Windows API.
Sunday, October 28, 2007
A SHORT NOTE ABOUT SECURITY:-
Security:-
Many operating systems include some level of security. Security is based on two ideas:
The operating system provides access to a number of resources, directly or indirectly, such as files on a local disk, privileged system calls, personal information about users, and the services offered by the programs running on the system;
The operating system is capable of distinguishing between some requesters of these resources who are authorized (allowed) to access the resource, and others who are not authorized (forbidden). While some systems may simply distinguish between "privileged" and "non-privileged", systems commonly have a form of requester identity, such as a user name. Requesters, in turn, divide into two categories:
Internal security:-
An already running program. On some systems, a program, once it is running, has no limitations, but commonly the program has an identity, which it keeps and which is used to check all of its requests for resources.
External security:-
A new request from outside the computer, such as a login at a connected console or some kind of network connection. To establish identity there may be a process of authentication. Often a username must be supplied, and each username may have a password. Other methods of authentication, such as magnetic cards or biometric data, might be used instead. In some cases, especially connections from the network, resources may be accessed with no authentication at all.
In addition to the allow/disallow model of security, a system with a high level of security will also offer auditing options. These would allow tracking of requests for access to resources (such as, "who has been reading this file?").
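As a compressed sketch of that allow/disallow-plus-auditing model, here are a few lines of Python. The users, resources, and rights are all made up, and real operating systems of course implement this inside the kernel rather than in a script.
*******************************************************
# Compressed sketch of the allow/disallow model plus auditing: every
# request is checked against an access-control table, and every check is
# logged so questions like "who has been reading this file?" can be
# answered later. Users, resources, and rights here are all invented.

from datetime import datetime, timezone

ACL = {  # resource -> {requester: set of allowed operations}
    "/payroll.db": {"alice": {"read", "write"}, "bob": {"read"}},
}
audit_log = []

def request(user, op, resource):
    allowed = op in ACL.get(resource, {}).get(user, set())
    audit_log.append((datetime.now(timezone.utc), user, op, resource, allowed))
    return allowed

request("bob", "read", "/payroll.db")    # allowed
request("bob", "write", "/payroll.db")   # denied, but still recorded
print([(u, op, ok) for _, u, op, r, ok in audit_log if r == "/payroll.db"])
*******************************************************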
Security of operating systems has long been a concern because of highly sensitive data held on computers, both of a commercial and military nature. The United States Department of Defense (DoD) created the Trusted Computer System Evaluation Criteria (TCSEC), a standard that sets basic requirements for assessing the effectiveness of security. This became of vital importance to operating system makers, because the TCSEC was used to evaluate, classify and select computer systems being considered for the processing, storage and retrieval of sensitive or classified information.
SanDisk has begun shipping its 8GB microSDHC!
SanDisk has begun shipping its 8GB microSDHC and M2 flash memory cards. The company hopes that they will find a market among users of memory-card-ready mobile phones. Available now for $140 and $150, respectively.
SanDisk Corporation (NASDAQ: SNDK), formerly SunDisk, is an American multinational corporation which designs and markets flash memory card products. SanDisk was founded in 1988 by Eli Harari, a non-volatile memory technology expert, and Sanjay Mehrotra. SanDisk became a publicly traded company on NASDAQ in November 1995. SanDisk produces many different types of flash memory, including various memory cards and a series of USB removable drives. SanDisk markets both high-end and low-end flash memory.
For more, click the link --> [Electronista]
Windows Vista (part of the Microsoft Windows family)
Thursday, October 25, 2007
Windows' new OS: Vienna
How to lock a folder... a good one for you!
The Intel® Core™ Duo processor breaks new ground. Its dual-core technology rewrites the rules of computing, delivering optimized, power-efficient computing and breakthrough dual-core performance with amazingly low power consumption. The Intel Core Duo processor is available in Intel's premium laptop platform, Intel® Centrino® Duo mobile technology. It can also be found in select Intel® Viiv™ technology-based systems.
Features and benefits:-
Outstanding dual-core performance:-
With its two execution cores, the Intel Core Duo processor is optimized for multi-threaded applications and multitasking. You can simultaneously run multiple demanding applications, such as graphics-intensive games or serious number-crunching programs, while downloading music or running virus-scanning security programs in the background.
Power efficiency:-
Demand for greater power efficiency in computing is on the rise, from desktop to laptop PCs. With an Intel Core Duo processor, you get a balance of great dual-core computing capabilities and power savings. Its enhanced voltage efficiency supports cooler and quieter system designs compared to traditional desktop and laptop PCs. And thanks to the innovative energy-efficient technologies built in, the Intel® Core™ Duo processor is able to deliver power only to those areas of the processor that need it, enabling laptops to save power and desktops to have thinner, sleeker designs.
A vibrant media experience:-
The Intel Core Duo processor enables your Intel Viiv technology and Intel Centrino Duo mobile technology multimedia experience to be all the more vibrant. Featuring Intel® Digital Media Boost, the Intel® Core™ Duo processor enables accelerating technologies for applications such as CAD tools, 3D and 2D modeling, video editing, digital music, digital photography and gaming. This is one of the key ingredients that help Intel Viiv technology and Intel Centrino Duo mobile technology give you a truly rich multimedia experience.
Smarter, more efficient designs:-
The Intel Core Duo processor features Intel® Smart Cache, which helps deliver a smarter and more efficient cache and bus design to enable enhanced dual-core performance and power savings.
An essential ingredient in Intel® Centrino® Duo mobile technology:-
The Intel® Core™ Duo processor is Intel's first mobile dual-core processor and a key component of the new Intel Centrino Duo mobile technology platform.
Folder Lock without any S/W:-
Many people have been asking for an alternative way to lock folders without the use of any additional software. So, here you go:
1. Open Notepad and copy the code below.
2. Change your password in the code; the place to type it is marked.
3. Save the file as locker.bat.
4. Now double-click on locker.bat.
5. It will create a folder named Locker automatically for you. Once the Locker folder is created, place the contents you want to lock inside it and run locker.bat again.
******* ********* ********* ********* ********* ****
cls
@ECHO OFF
title Folder Locker
REM If the disguised folder already exists, go straight to the unlock prompt.
if EXIST "Control Panel.{21EC2020-3AEA-1069-A2DD-08002B30309D}" goto UNLOCK
if NOT EXIST Locker goto MDLOCKER
:CONFIRM
echo Are you sure you want to lock the folder? (Y/N)
set/p "cho=>"
if "%cho%"=="Y" goto LOCK
if "%cho%"=="y" goto LOCK
if "%cho%"=="n" goto END
if "%cho%"=="N" goto END
echo Invalid choice.
goto CONFIRM
:LOCK
REM Renaming the folder to this CLSID makes Explorer show it as the Control
REM Panel, and the +h +s attributes hide it from normal directory listings.
ren Locker "Control Panel.{21EC2020-3AEA-1069-A2DD-08002B30309D}"
attrib +h +s "Control Panel.{21EC2020-3AEA-1069-A2DD-08002B30309D}"
echo Folder locked
goto END
:UNLOCK
echo Enter password to unlock the folder
set/p "pass=>"
REM Change your_password_here below to your own password.
if NOT "%pass%"=="your_password_here" goto FAIL
attrib -h -s "Control Panel.{21EC2020-3AEA-1069-A2DD-08002B30309D}"
ren "Control Panel.{21EC2020-3AEA-1069-A2DD-08002B30309D}" Locker
echo Folder unlocked successfully
goto END
:FAIL
echo Invalid password
goto END
:MDLOCKER
md Locker
echo Locker created successfully
goto END
:END