Content Delivery Networks (CDN) Research Directory
The proliferation of Content Delivery Networks (CDNs) has spurred the research community to investigate related issues in the field of content delivery and distribution. Some of the most active areas of research for content networks are: CDN placement, content selection, request routing and resource optimization, content outsourcing, content replication and caching, traffic congestion and load dissemination, CDN peering, content pricing and so on. This site is dedicated to providing useful information to researchers who are working on content networks.
Upcoming Book on Content Delivery Networks (Springer) New
The members of the GRIDS Laboratory, the University of Melbourne, are working to bring out an edited book on Content Delivery Networks (to be published by Springer). More information can be found here.
Insight into Content Delivery Networks (CDN)
What is CDN?:
Content Delivery Networks (CDNs), which first emerged in 1998, replicate content over several mirrored Web servers (i.e., surrogate servers) strategically placed at various locations in order to deal with flash crowds. Geographically distributing Web server facilities is a method commonly used by service providers to improve performance and scalability. A CDN combines a content-delivery infrastructure, a request-routing infrastructure, a distribution infrastructure and an accounting infrastructure. CDNs improve network performance by maximizing bandwidth, improving accessibility and maintaining correctness through content replication, and thus offer fast and reliable applications and services by distributing content to cache servers located close to users.

Figure 1: Abstract architecture of a Content Delivery Network (CDN)
Figure 1 shows a typical content delivery environment in which replicated Web server clusters are located at the edge of the network to which the end-users are connected. In such a CDN environment, Web content is fetched from the origin server based on user requests, and a user is served with the content from a nearby replicated Web server. The user thus ends up communicating with a replicated CDN server close to it and retrieves files from that server.

Figure 2: Content/services provided by a CDN
CDN providers host third-party content for fast delivery of any digital content, including static content (e.g. static HTML pages, images, documents, software patches), streaming media (e.g. audio, real-time video) and various content services (e.g. directory, e-commerce and file transfer services). The sources of content include large enterprises, Web service providers, media companies and news broadcasters. Clients interact with the CDN, specifying the content/service request through a cell phone, smart phone/PDA, laptop or desktop. Figure 2 depicts the different content/services served by a CDN to different clients.
Basic interactions in a CDN

Figure 3: Basic interaction flows in a CDN environment
Figure 3 provides a high-level view of the basic interaction flows among the components in a Content Delivery Network (CDN) environment. Here, discovery.com is the content provider and Akamai is the CDN that hosts the content of discovery.com. The interaction flows are: 1) the client requests content from www.discovery.com by specifying its URL in the Web browser, and the client’s request is directed to the origin server of discovery.com; 2) when discovery.com receives the request, its Web server decides to provide only the basic content (e.g. the index page of the site) that can be served from its origin server; 3) to serve the high-bandwidth-demanding and frequently requested content (e.g. embedded objects such as fresh content, the navigation bar and banner ads; Figure 4 shows a Web page whose embedded objects are served by the Akamai CDN), discovery.com’s origin server redirects the client’s request to the CDN provider (Akamai, in this case); 4) using its proprietary selection algorithm, the CDN provider selects the replica server ‘closest’ to the client in order to serve the requested embedded objects; 5) the selected replica server gets the embedded objects from the origin server, serves the client’s request and caches them for subsequent request servicing.

Figure 4: Typical embedded web page contents served by Akamai CDN.
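The five interaction steps above can be sketched as a toy simulation. All names, latencies and the selection rule below are invented for illustration; the real selection algorithm is proprietary, as the text notes.

```python
# Toy sketch of the Figure 3 flow: the origin serves the base page itself,
# while embedded objects are redirected to the CDN, which picks the replica
# 'closest' to the client and caches objects after the first fetch.

ORIGIN_OBJECTS = {"/index.html": "<html>base page</html>",
                  "/banner.gif": "<gif bytes>"}

class Replica:
    def __init__(self, name, distance_to):
        self.name = name
        self.distance_to = distance_to   # client -> latency estimate
        self.cache = {}

    def serve(self, path):
        # Step 5: fetch from origin on a miss, then cache for later requests.
        if path not in self.cache:
            self.cache[path] = ORIGIN_OBJECTS[path]
        return self.cache[path]

def cdn_select(replicas, client):
    # Step 4: a stand-in for the provider's proprietary selection algorithm.
    return min(replicas, key=lambda r: r.distance_to[client])

replicas = [Replica("us-east", {"alice": 20, "bob": 90}),
            Replica("eu-west", {"alice": 95, "bob": 15})]

def request(client, path):
    if path == "/index.html":          # Steps 1-2: origin serves base content
        return "origin", ORIGIN_OBJECTS[path]
    r = cdn_select(replicas, client)   # Steps 3-4: redirect embedded objects
    return r.name, r.serve(path)

print(request("alice", "/index.html"))  # served by the origin
print(request("alice", "/banner.gif"))  # served by the nearer replica
```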
International Activities on CDNs
Transactions/Journals/Magazines
IEEE Internet Computing
Communications of the ACM
IEEE Network
IEEE/ACM Transactions on Networking
Elsevier International Journal on Parallel and Distributed Computing (JPDC)
World Wide Web Internet and Web Information Systems
Symposiums
IEEE International Symposium on Modeling, Analysis, and Simulation of Computer and Telecommunications Systems (MASCOTS) (www.mascots-conference.org)
International Symposium on Computers and Communication (www.informatik.uni-trier.de/~ley/db/conf/iscc/index.html)
Symposium on Applied Computing (www.cs.iupui.edu/~bioin)
Conferences
IEEE GLOBECOM: Global Communications Conference (www.comsoc.org/confs/globecom/index.html)
IEEE INFOCOM: Conference on Computer Communications (www.ieee-infocom.org)
ACM SIGCOMM Conference (www.acm.org/sigs/sigcomm)
ACM SIGMETRICS (www.sigmetrics.org/)
Internet Measurement Conference (www.imconf.net)
International World Wide Web Conference (www.iw3c2.org)
IEEE ICC: International Conference on Communications (www.ieee-icc.org/)
Workshops
Workshop on the Use of P2P, GRID and Agents for the Development of Content Networks (http://lisdip.deis.unical.it/workshops/upgrade-cn07/index.htm)
The International Web Content Caching and Distribution Workshop (http://www.iwcw.org/)
mCDN Workshop (www.comtec.e-technik.uni-kassel.de/content/projects/mcdn/)
Workshop on Content Distribution Networks (www2.cs.ucy.ac.cy/~lambrosl/wccdn/)
Advanced Workshop on Content Computing (www.i-awcc.org/)
Existing CDNs
Commercial CDNs
There are many commercial CDNs (e.g., Akamai, Adero, Digital Island, Mirror Image, LocalMirror, Inktomi, LimeLight Networks). The table below provides a brief overview of present CDNs in terms of infrastructure, services and coverage.
CDNs
Overview
Accelia
[www.accelia.net]
Accelia Inc., established in 2000 in Japan, is a Content Distribution Service (CDS) provider. It provides the Internet infrastructure and technologies required to facilitate reliable and timely distribution of Internet data and content. Accelia offers distributed content distribution services for static and streaming content. Accelia’s load balancing network service eliminates delays over the Internet by placing caching servers at distributed locations in order to decentralize local and global Internet traffic flow. User requests for content are directed to nearby Accelia cache (surrogate) servers through DNS-based request redirection. The caching servers provide synchronized, up-to-date content from the origin server. Accelia’s DNS server monitors the status of each cache site and evaluates the traffic pattern.
Accelia’s CDS “DuraSite” is used widely by major media companies, Internet advertising companies and event sites in Japan. Currently, Accelia provides services to Internet data centers, Internet service providers and other carrier companies across 5 countries in the Asia Pacific region.
Accellion
[www.accellion.com]
Accellion is a privately held company headquartered in Palo Alto, California with offices in North America, Asia and Europe. It provides a large-scale file delivery service. Accellion’s products are built on the SeOS (SmartEdge Operating System) technology, a distributed file storage and transmission infrastructure for enterprise applications. The SeOS technology, which can scale globally, enables Accellion to move, replicate and manage large files efficiently and intelligently. It unifies and manages multiple storage types, across geographically dispersed locations, using a range of transport and delivery protocols.
Accellion Courier Secure File Transfer Appliances (SFTA) is an on-demand file transfer solution for securely exchanging files. Accellion Courier sends large attachments outside of the e-mail infrastructure while providing the convenience of e-mail to both sender and receiver. The sender sends large files, including gigabyte-sized files, through a Web-based interface, and the receiver receives an e-mail with an embedded, secure HTTP link. It allows enterprises to eliminate FTP servers, improve e-mail infrastructure performance and reduce IT management footprint requirements. Accellion also provides online desktop and server backup and recovery solutions through Accellion Backup and Recovery Solutions (BRS). Accellion’s customers come from industries such as advertising/media production, manufacturing, healthcare, consumer goods and higher education.
Activate
[www.active.com]
Activate is a business-to-business digital media solution provider and a supplier of white-label digital music platforms across Europe. It is a provider of streaming and media caching. Activate offers end-to-end digital media solutions, both white-label and custom, for downloading and streaming music services to PCs, mobile phones and set-top boxes. Activate provides a fully integrated mobile service featuring over-the-air download and full catalogue availability with search, browse and wishlist functionalities. Activate also offers single sign-on and integrated “direct-to-operator-bill” services, developed by exposing the business-tier objects of a system through Web service layers.
It has more than 75 live services in over 20 countries and multiple languages across Europe and the rest of the world. Typical customers of Activate are music retailers, Internet service providers, mobile operators, consumer electronics manufacturers and media companies. Across 20 countries, Activate has 75 customers including Coca-Cola, MSN (Pan-Europe), MTV, Nokia, Tiscali, Wanadoo, and many more.
Akamai
[www.akamai.com]
Akamai Technologies evolved out of an MIT research effort aimed at solving the flash crowd problem. It is the market leader in providing content delivery services, with more than 18,000 servers across 1000 networks in 70 countries. Akamai’s approach is based on the observation that serving Web content from a single location can present serious problems for site scalability, reliability and performance. Hence, a system is devised to serve requests from a variable number of surrogate origin servers at the network edge. Akamai servers deliver static and dynamic content as well as streaming audio and video.
Akamai’s infrastructure handles flash crowds by allocating more servers to sites experiencing high load, while serving all clients from nearby servers. The system directs client requests to the nearest available server likely to have the requested content. Akamai provides automatic network control through its mapping technique (i.e. directing requests to content servers), which uses a dynamic, fault-tolerant DNS system. The mapping system resolves a hostname based on the service requested, user location and network status; it also uses DNS for network load balancing. Akamai name servers resolve hostnames to IP addresses by mapping requests to a server. Akamai agents communicate with certain border routers as peers, and the mapping system uses BGP information to determine network topology. The mapping system combines this topology information with live network statistics, such as traceroute data, to provide a detailed, dynamic view of network structure and quality measures for different mappings.
Akamai’s DNS-based load balancing system continuously monitors the state of services and their servers and networks. To monitor the entire system’s health end-to-end, Akamai uses agents that simulate end-user behavior by downloading Web objects and measuring their failure rates and download times. Akamai uses this information to monitor overall system performance and to automatically detect and suspend problematic data centers or servers. Each of the content servers frequently reports its load to a monitoring application, which aggregates and publishes load reports to the local DNS server. That DNS server then determines which IP addresses (two or more) to return when resolving DNS names. If a certain server’s load exceeds a certain threshold, the DNS server simultaneously assigns some of the server’s allocated content to additional servers. If the server’s load exceeds another threshold, the server’s IP address is no longer available to clients. The server can thus shed a fraction of its load when it experiences moderate to high load. The monitoring system in Akamai also transmits data center load to the top-level DNS resolver to direct traffic away from overloaded data centers. In addition to load balancing, Akamai’s monitoring system provides centralized reporting on content service for each customer and content server. This information is useful for network operational and diagnostic purposes.
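The two-threshold rule described above can be sketched as follows. The threshold values, IP addresses and function shape are invented for illustration; Akamai’s actual thresholds and policies are proprietary.

```python
# Sketch of two-threshold DNS load balancing: above the first threshold a
# server keeps serving but its content is also assigned to extra servers;
# above the second, the server's IP is withheld from DNS answers entirely.

SPILL_THRESHOLD = 0.7   # hypothetical: start sharing load with spare servers
DROP_THRESHOLD = 0.9    # hypothetical: stop advertising the server at all

def resolve(servers, spares):
    """Return the IPs a DNS answer may contain, given reported loads.

    servers: {ip: load in [0, 1]} for the normally assigned servers
    spares:  extra IPs that can absorb spilled load
    """
    answer, spill = [], False
    for ip, load in servers.items():
        if load >= DROP_THRESHOLD:
            spill = True                 # shed this server completely
        elif load >= SPILL_THRESHOLD:
            answer.append(ip)
            spill = True                 # keep it, but add helpers too
        else:
            answer.append(ip)
    if spill:
        answer.extend(spares)
    return answer

print(resolve({"10.0.0.1": 0.5}, ["10.0.0.9"]))    # light load: no spares
print(resolve({"10.0.0.1": 0.75}, ["10.0.0.9"]))   # moderate: spares added
print(resolve({"10.0.0.1": 0.95}, ["10.0.0.9"]))   # overload: server dropped
```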
AppStream [www.appstream.com]
AppStream is a private company funded by Draper Fisher Jurvetson, JK&B Capital, Goldman Sachs, Evergreen Partners, Sun Microsystems and Computer Associates. It is a provider of technology for on-demand software distribution and software license management tools for extended enterprises. The AppStream platform is scalable and its hardware investment is minimal, with one server handling approximately 1000 users. AppStream allows users to launch any application from a browser or from a traditional desktop shortcut. With AppStream, software can be managed as a service within an enterprise. Users in an enterprise can thus stream and cache both desktop and enterprise applications, since all application functionality is preserved by AppStream, including interaction with peripherals and with traditionally installed applications.
AppStream provides solutions in four key areas: Self-Service Software Distribution, Software License Management, Remote Software Access, and Virtual Image Distribution. Its product is AppStream Software 5.0, a self-service software distribution and license management platform. AppStream software divides applications into the minimum number of segments (streamlets) required to start the application on a client desktop, delivering further streamlets to users as needed based upon their usage behavior. Users get the experience of a fully installed product, while from the business perspective AppStream provides the full functionality of a locally installed application, allowing centralized access and flexible scalability for any enterprise. The AppStream server communicates as a traditional Web application, using HTTP, with the Software Streaming Transfer Protocol (SSTP) running over HTTP for efficient delivery of application segments [63]. AppStream’s customers include Fortune 1000 corporations, educational institutions and government.
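The streamlet idea can be illustrated with a rough sketch. The segment size, byte contents and startup-set size below are invented; this is not AppStream’s implementation.

```python
# Sketch of streamlet delivery: the application binary is cut into segments,
# a minimal "startup set" is delivered first so the program can launch, and
# the remaining segments are streamed only when usage first touches them.

SEGMENT_SIZE = 4  # bytes per streamlet; tiny, for illustration only

def make_streamlets(binary: bytes):
    return [binary[i:i + SEGMENT_SIZE]
            for i in range(0, len(binary), SEGMENT_SIZE)]

class StreamedApp:
    def __init__(self, streamlets, startup_count=1):
        self.remote = streamlets
        # deliver only the minimum number of segments needed to start
        self.local = dict(enumerate(streamlets[:startup_count]))

    def read(self, offset):
        seg = offset // SEGMENT_SIZE
        if seg not in self.local:          # fetch a streamlet on first use
            self.local[seg] = self.remote[seg]
        return self.local[seg][offset % SEGMENT_SIZE]

app = StreamedApp(make_streamlets(b"HEADmainextra"))
assert len(app.local) == 1      # launched with just the startup segment
app.read(5)                     # touching byte 5 pulls in segment 1
assert len(app.local) == 2
```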
EdgeStream [www.edgestream.com]
EdgeStream, based in Southern California, provides video streaming applications over the public Internet. EdgeStream provides video-on-demand and IPTV streaming software to enable cost-effective and error-free transport of high bit rate video over the Internet. It ensures uninterrupted DVD-quality video streams over consumer cable or ADSL modem connections all over the world, even over paths with 20 router hops between server and end user. EdgeStream has developed Continuous Route Optimization (CROS) and Congestion Tunnel Through (ICTT) technologies that address latency, packet loss and congestion bottlenecks. EdgeStream’s network architecture allows operators to build an efficient delivery network with low investment and maintenance cost.
EdgeStream software is used for high quality video streaming. Embedded applications in Consumer Electronics Devices, wireless handheld devices, IP Set Top Boxes, and advanced digital TV’s can use the EdgeStream software for high quality video streaming. Typical users of EdgeStream software include network providers, telcos, portals, CDNs, ISPs, enterprises, content owners, and content aggregators. EdgeStream offers demonstration of its performance to the prospective customers through maintaining a streaming server network, and offers short term and long term video hosting services for a quick and cost effective roll out of video applications.
Globix
[www.globix.com]
Globix is a provider of Internet infrastructure and network services. It offers a comprehensive suite of services, from network bandwidth to the management of Web applications, servers and databases, to security, media streaming and collocation. Globix provides four types of services: Network Services, Hosting Services, Managed Services, and Media Services. Globix Network Services are flexible, scalable and cost-effective, with built-in reliability and SLAs. The Globix Hosting service is provided with security and redundancy, and is connected to the high-speed Globix network. Under the Managed Services, Globix offers security, storage, messaging, disaster recovery, monitoring, application and database management services. Globix also provides Media Services to capture, store, host and distribute media content, from live event production and encoding to presentation tools and traffic analysis.
The Globix load balancing service distributes traffic among multiple servers, sending each request to the least loaded server in the dedicated server cluster. Unlike software-based load balancers, the service is built on an ASIC-based hardware architecture that provides increased traffic performance. Globix offers comprehensive monitoring services to measure physical network and server hardware, Web and application services, and backend database performance.
Globix Internet infrastructure consists of both a trans-Atlantic/trans-continental IP backbone as well as an optical network throughout the Northeast and mid-Atlantic regions. Globix IP backbone connects the customers to the Internet via a high-capacity network, fully owned and operated by Globix. It has more than 1200 customers.
LimeLight Networks [www.limelightnetworks.com]
Limelight Networks is a content delivery network that provides distributed on-demand and live delivery of video, music, games and downloads. It has created a scalable system for distributed digital media delivery to large audiences. Limelight Networks has the following products: Limelight ContentEdge for distributed content delivery via HTTP, Limelight MediaEdge Streaming for distributed video and music delivery via streaming, and Limelight Custom CDN for custom distributed delivery solutions.
Limelight ContentEdge provides a highly reliable, scalable and efficient delivery platform that guarantees on-time content delivery, meeting Service Level Agreements (SLAs). Limelight MediaEdge Streaming is a powerful distribution platform that provides high performance services for live and on-demand streaming of audio and video content over the Internet. Limelight Networks has a flexible CDN platform that allows it to shape the CDN to meet any content provider’s specific needs and environment.
Typical Limelight Networks’ customers include companies who use Internet for product delivery, and deliver extreme volume of content to large audiences. Limelight Networks has surrogate servers located in 72 locations around the world including New York, Los Angeles, San Jose, London, Amsterdam, Tokyo, and Hong Kong.
LocalMirror [www.localmirror.com]
LocalMirror is a privately owned company that provides a content distribution service using globally dispersed cache nodes, advanced algorithms and smart routing technology. It offers very fast static content downloads and audio/video streaming for end-users. Content is pushed closer to the users for fast and cost-effective delivery using the LocalMirror Content Delivery Network (CDN) technology. The LocalMirror CDN service supports a virtually unlimited number of simultaneous connections for static and non-static audio and video streams, depending on cache node locations and client traffic requirements. The LocalMirror CDN technology thus distributes file downloads and audio/video streams from the closest location with lower latency, providing a better Internet experience.
The LocalMirror Content Delivery Network is powered by UltraRoute™ and criticalDNS™ technologies. LocalMirror’s global network load balancing offers no single point of failure, since client traffic is distributed among multiple cache nodes. LocalMirror’s cache nodes and application servers are located in multiple countries and data centers around the globe, utilizing top Tier-1 and Tier-2 ISP fiber connections.
Mirror Image
[www.mirror-image.com]
Mirror Image is a global network that is dedicated to providing online content, application and transaction delivery to users around the world. It provides content delivery, streaming media, Web computing and reporting services. It offers solutions that allow customers a smarter way to create more engaging Web experiences for users worldwide.
Mirror Image exploits a global Content Access Point (CAP) infrastructure on top of the Internet to provide content providers, service providers and enterprises with a platform that delivers Web content to end users. As a secure and managed high-speed layer on top of the Internet, each CAP offloads origin servers and networks by intelligently placing content at locations closer to users world-wide. Mirror Image has surrogate servers located at network peering points in 22 countries across America, Europe and Asia, where the concentration of Web traffic and users is the highest. Customers of Mirror Image include Creative, Open Systems, and SiteRock.
Netli
[www.netli.com]
Netli is a privately owned company based in Mountain View, California. It is a provider of business-quality Internet services that address the performance limitations of the Internet. The NetliOne platform is a global Application Delivery Network (ADN) that ensures short response times and improved visibility and control of applications over the Internet. Netli ADN services are delivered over the NetliOne platform, which consists of a series of globally distributed Virtual Data Centers (VDCs) and Application Access Points (AAPs), a global DNS redirection and IP address mapping system, high performance protocol and content optimization software, an online monitoring and reporting system, and a 24x7 network operations center.
Netli’s services are: NetLightning® for optimizing the delivery of Web applications and content by providing sub-second response times and increased availability; NetliOffload™ to deliver reliable and high performance infrastructure that meets enterprise requirements; NetliView™ to provide near real-time information on the performance, availability and usage patterns of business applications; and NetliContinuity™ to gain strategic control and management of data center resources. Netli has computer clusters in 13 cities around the world. Organizations such as HP, Thomson, Millipore, and Nielsen/NetRatings use Netli’s services.
SyncCast
[www.synccast.com]
SyncCast is a leading content delivery network. It facilitates global load balancing, utilizing Tier-1 Internet backbones. SyncCast offers complete solutions from application development, Web hosting and Internet connectivity to deployment and systems integration. It provides solutions for delivering digital content and related data via the Internet and other media. SyncCast is designed to deliver the highest quality content rapidly in a cost-effective manner. SyncCast load balances client traffic through the use of load-balancing equipment from major vendors such as F5, Cisco and Foundry. SyncCast thus allows users to quickly connect to streaming servers in multiple data centers over the most efficient network path, so users perceive better network performance. SyncCast offers Peer-to-Peer (P2P) streaming technologies to streaming clients. SyncCast’s P2P technology intelligently monitors the quality of the audio/video stream to each user and switches users to another stream source when service quality degrades.
SyncCast is also a partner with leading technology companies such as Microsoft, Dell, and FotoKem. SyncCast’s clients include the Motion Picture Association of America, Walmart Music, Lions Gate Films, Microsoft, EMI Music Group, Technicolor and Billboard Radio.
Value CDN [www.valuecdn.com]
Value CDN offers a low-cost content delivery network for small to large Web sites using its dispersedly located cache servers. Value CDN pushes content physically closer to the end user and serves files with much lower latency than traditional single-point Web hosting solutions. It offers industry-leading low-cost content delivery through its SilverNET CDN service. Value CDN is a European company and offers multiple points of presence in North America and Europe, including Germany, the UK, Sweden and other countries.
VitalStream [www.vitalstream.com]
VitalStream is a provider of video and audio streaming media and broadcasting services. VitalStream Small Business Services is a fully distributed network, which uses an advanced “Synchronous-When-Optimal” routing service to deliver content through congested exchange points on the Internet from the most optimal data center. VitalStream’s routing technology continuously monitors the traffic situation over all major Internet backbones and routes mission-critical data faster in a more reliable and managed way.
VitalStream has multiple data centers and Points-of-Presence (POPs) distributed around the globe. VitalStream serves a broad customer base including Fortune 500 corporations, movie studios, news broadcasters, music and radio companies, advertising agencies and educational institutions.
Academic CDNs
The following table provides an overview of the existing Academic CDNs.
CDNs
Overview
Codeen
[www.codeen.cs.princeton.edu] Codeen, developed at Princeton University, USA, provides caching of Web content and redirection of HTTP requests. It is an academic testbed content distribution network built on top of PlanetLab, consisting of a network of high performance proxy servers. Each of these proxy servers acts both as a request redirector and as a surrogate server. Together they provide fast and robust Web content delivery through cooperation and collaboration.
Codeen operates in the following way: 1) users point their browsers at a nearby high-bandwidth proxy that participates in the Codeen system; 2) requests to that proxy are then forwarded to an appropriate member of the system that has the file cached and has sent recent updates showing that it is still alive. The file is forwarded to the proxy and then to the client. Thus, even if the origin server’s response time is slow, requests for a file will be served quickly as long as its content is cached in the system; the request also need not be satisfied by the original server. For rare files this system could be slightly slower than downloading the file directly. The system is also constrained by the number of participating proxies.
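The two steps above can be sketched roughly as follows. The class, variable names and paths are invented; this is not CoDeeN’s actual code.

```python
# Sketch of cooperative proxying: the client's entry proxy forwards a
# request to whichever live peer has the file cached, and only on a
# system-wide miss does anyone contact the (possibly slow) origin server.

class Peer:
    def __init__(self, name, alive=True):
        self.name, self.alive, self.cache = name, alive, {}

def fetch(entry_proxy, peers, url, origin):
    # Step 2: forward to a live peer that holds the file and has sent
    # recent "still alive" updates; no origin contact on this fast path.
    for p in peers:
        if p.alive and url in p.cache:
            return p.cache[url], p.name
    body = origin[url]                 # system-wide miss: go to the origin
    entry_proxy.cache[url] = body      # cache so later requests stay fast
    return body, entry_proxy.name

origin = {"/video.mpg": "<bytes>", "/rare.pdf": "<bytes>"}
p1, p2 = Peer("p1"), Peer("p2")
p2.cache["/video.mpg"] = "<bytes>"

print(fetch(p1, [p1, p2], "/video.mpg", origin)[1])  # p2: served from cache
print(fetch(p1, [p1, p2], "/rare.pdf", origin)[1])   # p1: fetched from origin
```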
A number of projects are related to Codeen – CoBlitz (a scalable Web-based distribution system for large files), CoDeploy (an efficient synchronization tool for PlanetLab slices), CoDNS (a fast and reliable name lookup service), CoTop (a command line activity monitoring tool for PlanetLab), CoMon (a Web-based slice monitor that monitors most PlanetLab nodes), and CoTest (a login debugging tool). All accesses via Codeen are logged. These log files are monitored for performance measurement of Codeen system.
COMODIN
COMODIN (COoperative Media On-Demand on the InterNet) is an academic streaming CDN providing collaborative media streaming services on the current Internet infrastructure. It is currently deployed on an international distributed testbed, which is composed of three logical sub-networks, the SATRD, LISDIP and ICAR-CS intranets, located at different geographical locations and connected through the Internet. The two-plane architecture of COMODIN consists of an IP-multicast enabled streaming CDN, which represents the Base plane, and a set of distributed components providing collaborative playbacks, which represents the Collaborative plane. At the client side, the system centers on Java-based applications and applets which interface with the cooperative groups of users. COMODIN uses a protocol to let a group of users coordinate and control the playback of a common multimedia streaming session. In a shared session, if a control command (e.g. play/pause) is issued by a user, upon acceptance by the control server all users experience the same change on the shared session. The protocol is cooperative in the sense that clients try not to hinder each other: if a client detects a control request issued by another client, it will not issue further requests until it receives a reply from the server. Therefore, the focus of COMODIN is not on content delivery but on control. Significant applications featured by the COMODIN system include Collaborative Learning on-Demand and Distributed Virtual Theaters.
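The cooperative control rule described above can be sketched as a small state machine. The class, method and client names are invented for illustration.

```python
# Sketch of cooperative playback control: a client that sees another
# client's pending control request holds back its own commands until the
# server replies, and an accepted command changes the shared session
# state for every participant.

class SharedSession:
    def __init__(self):
        self.state = "playing"
        self.pending = None            # client whose request awaits a reply

    def request(self, client, command):
        if self.pending and self.pending != client:
            return False               # be cooperative: don't hinder the peer
        self.pending = client
        return True

    def server_reply(self, accept, command):
        if accept:                     # all clients see the same change
            self.state = command
        self.pending = None

s = SharedSession()
assert s.request("c1", "pause")        # c1's pause request is in flight
assert not s.request("c2", "play")     # c2 waits for the server's reply
s.server_reply(True, "pause")
assert s.state == "pause" and s.request("c2", "play")
```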
Coral
[www.coralcdn.org] Coral is a free, peer-to-peer content distribution network designed to mirror Web content. Coral is designed to use the bandwidth of volunteers to avoid slashdotting and to reduce the load on websites and other Web content providers in general. To use CoralCDN, a content publisher has to append “.nyud.net:8090” to the hostname in a URL. Clients are redirected to nearby Coral Web caches transparently through DNS redirection. Coral Web caches cooperate to transfer data from nearby peers whenever possible, minimizing both the load on the origin Web server and the latency perceived by the user. Coral allows nodes to locate nearby cached objects without redundantly querying more distant nodes, and it prevents the creation of hotspots even under degenerate loads. Coral uses an indexing abstraction called the Distributed Sloppy Hash Table (DSHT), a variant of Distributed Hash Tables (DHTs). Performance measurements of Coral demonstrate that it allows under-provisioned Web sites to achieve dramatically higher capacity, and its clustering provides quantitatively better performance than locality-unaware systems.
During beta testing, the Coral node network is hosted on PlanetLab, a large scale distributed research network of 400 servers, instead of third party volunteer systems. Of those 400 servers, about 275 are currently running Coral. The source code is freely available under the terms of the GNU GPL.
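Since publishing through CoralCDN is just the hostname rewrite described above, the “Coralizing” step can be automated with a small helper (the example URLs are illustrative):

```python
# Rewrite a URL to go through CoralCDN by appending ".nyud.net:8090"
# to the hostname, as described in the text above.

from urllib.parse import urlsplit, urlunsplit

def coralize(url: str) -> str:
    parts = urlsplit(url)
    return urlunsplit(parts._replace(netloc=parts.hostname + ".nyud.net:8090"))

print(coralize("http://example.com/logo.png"))
# http://example.com.nyud.net:8090/logo.png
```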
Globule
[www.globule.org] Globule is an open-source collaborative content delivery network developed at the Vrije Universiteit in Amsterdam. Globule aims to allow Web content providers to organize together and operate their own world-wide hosting platform. It provides replication of content, monitoring of servers and redirecting of client requests to available replicas.
In Globule, a site is defined as a collection of documents that belong to one specific user (the site’s owner), and a server is a process running on a machine connected to a network which executes an instance of the Globule software. Each server is capable of hosting one or more sites and delivering content to clients. Globule takes inter-node latency as its proximity measure. This metric is used to place replicas optimally with respect to the clients, and to redirect clients to an appropriate replica server.
Globule is implemented as a third-party module for the Apache HTTP Server that allows any given server to replicate its documents to other Globule servers. To replicate content, content providers only need to compile an extra module into their Apache server and edit a simple configuration file. Globule automatically replicates content and redirects clients to a nearby replica. This can improve a site’s performance, keep the site available to its clients even if some servers are down, and, to a certain extent, help it resist flash crowds and the Slashdot effect. Globule is available for public use under an open source license.
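The latency-based redirection policy described above can be sketched as follows. The replica names and latency figures are invented; Globule’s real estimator is part of its Apache module.

```python
# Sketch of proximity-based redirection: with inter-node latency as the
# proximity metric, a client is redirected to the replica server with the
# lowest estimated latency.

def closest_replica(latency_ms, replicas):
    """Pick the replica with the lowest estimated latency to the client.

    latency_ms: {replica_name: estimated latency in ms to this client}
    """
    return min(replicas, key=lambda r: latency_ms[r])

latency_ms = {"amsterdam": 12.0, "new-york": 95.0, "tokyo": 210.0}
print(closest_replica(latency_ms, latency_ms))  # amsterdam
```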
FCAN
FCAN (Flash Crowds Alleviation Network) is an adaptive CDN that dynamically switches the system structure between peer-to-peer (P2P) and client-server (C/S) configurations as a possible way to alleviate the flash crowd effect. FCAN constructs a P2P overlay on the cache proxy layer to distribute flash traffic away from the origin Web server. It uses policy-configured DNS redirection to balance client requests, and adopts strategic load detection to monitor and react to load changes.
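FCAN’s adaptive switching can be sketched as a simple two-threshold state machine. The threshold values and mode names are invented for illustration; FCAN’s actual detection policy is configurable.

```python
# Sketch of FCAN-style adaptation: serve in client-server mode normally,
# switch to a P2P overlay of cache proxies when load crosses a flash-crowd
# threshold, and switch back once load subsides (hysteresis avoids
# flip-flopping around a single threshold).

FLASH_UP, FLASH_DOWN = 0.8, 0.3   # hypothetical hysteresis thresholds

def next_mode(mode, load):
    if mode == "client-server" and load >= FLASH_UP:
        return "p2p"               # spread the flash traffic over proxies
    if mode == "p2p" and load <= FLASH_DOWN:
        return "client-server"     # flash crowd over: revert
    return mode

mode = "client-server"
for load in (0.2, 0.9, 0.6, 0.2):
    mode = next_mode(mode, load)
assert mode == "client-server"     # crowd came and went; back to C/S
```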
Publications on Content Delivery Networks
Being an emerging area, content networking draws the attention of book authors, researchers and publishers. Numerous books and research papers can be found on related areas of content networks. Of all those publications, only a few that have been found useful* are included.
*According to the opinion of the site editor
Books
A list of books on CDNs and related fields is available here.
Research Papers
A list of research papers on CDNs and related fields is available here.
RFCs
A list of RFCs on CDNs and related fields is available here.
Useful Links
A Taxonomy and Survey of Content Delivery Networks
Wiki Pages: http://en.wikipedia.org/wiki/Content_Delivery_Network
Content Delivery and Distribution Networks (www.web-caching.com/cdns.html )
Peer-to-Peer Content Distribution (www.cs.cmu.edu/~kunwadee/research/p2p)
CDN and Application Distribution (http://www.research.ibm.com/cdn)
Tech Talk-Content Delivery Networks (CDNs): How They Work and How They're Used?
Feedback
Please direct any suggestions and/or comments to the site editor: Mukaddim Pathan
____________________________________________________________________________________________________________________
Last Update: 06 November, 2007