WEB ARCHIVE


  1. Kit MacFarlane - Web archive of published film, television, and media criticism and commentary by the Australian critic.
  2. UK Web Archiving Consortium - Creating an archive of culturally significant UK websites.
  3. The Nutrition and Food Web Archive - Offers free nutrition and food resources for nutrition professionals and students.
  4. The Register: Britain's Web Presence to be Saved - Announcement of the creation of the UK Web Archiving Consortium (UKWAC).
  5. UK Web Archiving Consortium - Aimed at the broad research community and is systematically attempting to create an archive of social, historic and culturally significant web-based material from the UK domain.
  6. MIGenWeb Archives - Sanilac county historical records available for genealogical research.
  7. MIGenWeb Archives - Osceola county historical records available for genealogical research.
  8. MIGenWeb Archives - Roscommon county historical records available for genealogical research.
  9. Central West Scrappers - Web archive for an email list. Features galleries of members' layouts and tips.
  10. The Lied and Song Texts Page - Free web archive of the texts (lyrics) of many Lieder and other classical art songs in more than a dozen languages. English translations are also included.
  11. asco-o - Web archived ASCII art mailing list.
  12. Thanks for Nuthin - Single panel cartoon with resources to comic strips, animation, and new media on the web. Archive of past panels. By Brad Fitzpatrick.
  13. Shadow Island Games - Provides information on Olympia, a large-scale strategic fantasy simulation. Features a brief introduction to the game, along with turn reports and web archives [United States].
  14. Cabinet Office: Web Archive - Details of initiatives and programmes that have been completed or abandoned and are held for reference without further updates.
  15. Coyot's Jokes Web Archive - Czech archive of humour and jokes.
  16. The Victoria Gazette - Online version of monthly newspaper strives to be similar to the hardcopy version, and puts many articles in their entirety up on the Web. Archive of back issues available.
  17. Pandora Archive - Australia's Web archive, established initially by the National Library of Australia, and now built in collaboration with nine other Australian libraries and cultural collecting organisations.
  18. Archive-It.org - A subscription service from the Internet Archive, which allows institutions to build, manage and search their own web archive. Includes the sites of universities, libraries, and special interest collections of websites.
  19. The Library of Virginia - Governor Mark R. Warner Administration Web Archive - 2002–2006 archives include the web sites of the Governor’s Office, his initiatives and his cabinet secretaries along with the web sites of the First Lady, the Lieutenant Governor and the Attorney General.
  20. Hessischer Volleyballverband e.V. - Offers current information on league play throughout Hesse, as well as an administrative office, a web archive, and a search engine.





    Web archiving


    Web archiving is the process of collecting portions of the World Wide Web to ensure the information is preserved in an archive for future researchers, historians, and the public. Because of the massive size and amount of information on the Web, web archivists typically employ web crawlers for automated capture. The largest web archiving organization based on a bulk crawling approach is the Internet Archive, which strives to maintain an archive of the entire Web. National libraries, national archives, and various consortia of organizations are also involved in archiving culturally important Web content. Commercial web archiving software and services are also available to organizations that need to archive their own web content for corporate heritage, regulatory, or legal purposes.


    Collecting the web


    Web archivists generally archive various types of web content including HTML web pages, style sheets, JavaScript, images, and video. They also archive metadata about the collected resources such as access time, MIME type, and content length. This metadata is useful in establishing authenticity and provenance of the archived collection.
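    Such a provenance record can be sketched as a small dictionary keyed by the fields mentioned above. A minimal illustration in Python (the example headers and body are invented, not fetched from anywhere):

```python
from datetime import datetime, timezone

def make_metadata_record(url, headers, body):
    """Build a provenance record for one captured resource, covering the
    fields named above: access time, MIME type, and content length."""
    mime = headers.get("Content-Type", "application/octet-stream")
    return {
        "url": url,
        "access_time": datetime.now(timezone.utc).isoformat(),
        "mime_type": mime.split(";")[0].strip(),   # drop charset parameters
        "content_length": len(body),
    }

# Illustrative capture (headers and body are made up for the example):
record = make_metadata_record(
    "http://example.org/",
    {"Content-Type": "text/html; charset=utf-8"},
    b"<html><body>hello</body></html>",
)
```

    In a real archive this record would be written alongside the stored bitstream so the two can later be verified against each other.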


    Methods of collection



    Remote harvesting

    The most common web archiving technique uses web crawlers to automate the process of collecting web pages. Web crawlers typically access web pages in the same manner that users with a browser see the Web, and therefore provide a comparatively simple method of remotely harvesting web content. Examples of web crawlers used for web archiving include Heritrix, the Internet Archive's open-source archival crawler, as well as HTTrack and Wget.
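    At its core, remote harvesting is a breadth-first fetch-and-follow loop. The sketch below is an illustrative toy, not any real crawler's design: the pluggable `fetch` callable, the same-host restriction, and the `max_pages` cap are all assumptions made for the example.

```python
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

class LinkExtractor(HTMLParser):
    """Collects href targets from <a> tags, resolved against a base URL."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(urljoin(self.base_url, value))

def extract_links(base_url, html):
    parser = LinkExtractor(base_url)
    parser.feed(html)
    return parser.links

def crawl(seed, fetch, max_pages=100):
    """Breadth-first crawl restricted to the seed's host.

    `fetch` is any callable returning HTML for a URL, so the harvesting
    backend (urllib, a headless browser, ...) stays pluggable; `max_pages`
    caps the crawl against dynamic-page traps."""
    host = urlparse(seed).netloc
    seen, queue, archive = {seed}, deque([seed]), {}
    while queue and len(archive) < max_pages:
        url = queue.popleft()
        html = fetch(url)
        archive[url] = html
        for link in extract_links(url, html):
            if urlparse(link).netloc == host and link not in seen:
                seen.add(link)
                queue.append(link)
    return archive
```

    The same-host rule is one common scoping policy; production crawlers support far richer scope definitions.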

    On-demand

    There are numerous services that may be used to archive web resources "on-demand", using web crawling techniques.

    Free services open for public use
    • Archive.is – Free service which saves a page and all of its images. It can save Web 2.0 pages. Snapshots can be searched by URL wildcards.
    • ArchiveBay – Allows users to submit a URL to be archived in the form of a screenshot. Users can browse all saved screenshots of a URL.
    • freezePAGE snapshots – Free or subscription service. To preserve snapshots, unregistered users must log in every thirty days, and registered users every sixty days.[1]
    • megalodon.jp – Japanese on-demand archiver similar to WebCite. Snapshots can be searched by exact URL.
    • mummify.it – A Y Combinator-backed startup. It allows users to create personal collections of archived snapshots. Only 50 on-demand archives can be taken for free.
    • perma.cc – Service by the Harvard Law School Library, in its early stages as of September 2013. Archived pages are saved for two years and must then be manually renewed.[2]
    • Peeep.us – A free service that saves page content as seen by the user who created the link (e.g., a page inaccessible to outside viewers can still be archived upon submission). Unsubscribed users cannot delete their saved pages, and Peeep.us has no search function. Unlike other on-demand archivers, Peeep.us captures pages from the browser (by executing a JavaScript snippet in it) rather than from the web server. This allows it to archive personalized or password-protected pages, but it also lets anyone upload arbitrary content, which makes Peeep.us an unreliable source of evidence.[3]
    • textmirror.net – On-demand archiver using Lynx behind the scenes. It saves only the text of a web page, ignoring images and styles, and it ignores robots.txt.
    • WebCite – Free service specifically for scholarly authors, journal editors and publishers to permanently archive and retrieve cited Internet references.[4] Snapshots can be searched by exact URL.
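    At the HTTP level, most of these services follow the same pattern: the client submits the target URL to the archiver's endpoint and receives (a redirect to) the snapshot's permanent URL. A hedged sketch; the `archiver.example` endpoint and the redirect convention are assumptions for illustration, not any listed service's documented API:

```python
from urllib.parse import quote
from urllib.request import urlopen

SERVICE_BASE = "https://archiver.example/save/"   # hypothetical endpoint

def submit_url_for(target_url):
    """URL-encode the target page and append it to the submission endpoint."""
    return SERVICE_BASE + quote(target_url, safe="")

def request_snapshot(target_url):
    """Submit the page for archiving; we assume the service answers with
    (a redirect to) the snapshot's permanent URL."""
    with urlopen(submit_url_for(target_url)) as response:
        return response.geturl()
```

    Real services differ in how the target is passed (path segment, query parameter, or form POST) and in how the snapshot URL is reported back.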
    Enterprise and subscription services
    • Aleph Archives, offers web archiving services for regulatory compliance and eDiscovery, aimed at corporate (Global 500 market), legal, and government clients.
    • Archive-It, a subscription service which allows institutions to build, manage and search their own web archive.
    • Archivethe.net, a shared web-archiving platform operated by Internet Memory Research, the spin-off of the Internet Memory Foundation (formerly the European Archive Foundation).
    • Compliance WatchDog by SiteQuest Technologies, a subscription service that archives websites and allows users to browse the site as it appeared in the past. It also monitors sites for changes and alerts compliance personnel if a change is detected.
    • Hanzo Archives, provides web archiving, cloud archiving, and social media archiving software and services for e-discovery, information management, social enterprise content, Financial Industry Regulatory Authority, United States Securities and Exchange Commission, and Food and Drug Administration compliance, and corporate heritage. Hanzo is used by leading organizations in many industries, and national governmental institutions. Web archive access is on-demand in native format, and includes full-text search, annotations, redaction, archive policy and temporal browsing. Hanzo is integrated with leading electronic discovery applications and services.
    • Iterasi, provides enterprise web archiving for compliance, litigation protection, e-discovery, and brand heritage. For enterprise companies, financial organizations, government agencies, and more.
    • Nextpoint, offers an automated cloud-based SaaS for marketing, compliance, and litigation-related needs, including electronic discovery.
    • Patrina creates custom data management solutions for businesses needing to archive all electronically stored information (ESI) to satisfy record keeping guidelines of FINRA, FDA, SEC, SOX, FRCP and recent Dodd-Frank & CFTC regulations.
    • PageFreezer, a subscription SaaS service to archive, replay, and search websites, blogs, Web 2.0 content, Flash, and social media for marketing, eDiscovery, and regulatory compliance with the U.S. Food and Drug Administration (FDA), Financial Industry Regulatory Authority (FINRA), U.S. Securities and Exchange Commission, Sarbanes–Oxley Act, Federal Rules of Evidence, and records management laws. Archives can be used as legal evidence.
    • Reed Archives, offers litigation protection, regulatory compliance & eDiscovery in the corporate, legal and government industries.
    • SiteReplay, a subscription service. Captures screen-shots of pages, transactions and user journeys using "actual browsers" for regulatory compliance. Screen-shots can be viewed online.
    • Smarsh Web Archiving is designed to capture, preserve and re-create the Web experience as it existed at any moment in time for e-discovery and regulatory compliance obligations. (Smarsh acquired Perpetually in May 2012)
    • The Web Archiving Service is a subscription service optimized for the academic environment guided by input from librarians, archivists and researchers. WAS provides topical browsing, change comparison and site-by-site control of capture settings and frequency. Developed and hosted by the University of California Curation Center at the California Digital Library.
    • webEchoFS, offers a subscription service that was created exclusively to meet the needs of financial services companies subject to the advertising regulations associated with FINRA and the Investment Advisers Act.
    • Website-Archive, a subscription service. Captures screen-shots of pages, transactions and user journeys using "actual browsers". Screen-shots can be viewed online. Uses Cloud Testing technology.

    Database archiving

    Database archiving refers to methods for archiving the underlying content of database-driven websites. It typically requires the extraction of the database content into a standard schema, often using XML. Once stored in that standard format, the archived content of multiple databases can then be made available using a single access system. This approach is exemplified by the DeepArc and Xinq tools developed by the Bibliothèque nationale de France and the National Library of Australia respectively. DeepArc enables the structure of a relational database to be mapped to an XML schema, and the content exported into an XML document. Xinq then allows that content to be delivered online. Although the original layout and behavior of the website cannot be preserved exactly, Xinq does allow the basic querying and retrieval functionality to be replicated.
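    The DeepArc-style export step can be sketched with the standard library. The `<table>/<row>/<field>` element names below are illustrative, not DeepArc's actual schema:

```python
import sqlite3
import xml.etree.ElementTree as ET

def export_table_to_xml(conn, table):
    """Serialize every row of `table` into one flat XML document, in the
    spirit of a relational-to-XML export."""
    cur = conn.execute("SELECT * FROM %s" % table)
    columns = [d[0] for d in cur.description]
    root = ET.Element("table", name=table)
    for row in cur:
        row_el = ET.SubElement(root, "row")
        for col, value in zip(columns, row):
            field = ET.SubElement(row_el, "field", name=col)
            field.text = "" if value is None else str(value)
    return ET.tostring(root, encoding="unicode")

# Demo on an in-memory database standing in for a site's backend:
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE articles (id INTEGER, title TEXT)")
conn.execute("INSERT INTO articles VALUES (1, 'Web archiving')")
xml_doc = export_table_to_xml(conn, "articles")
```

    Once several databases have been flattened into a common XML shape like this, a single query layer (the role Xinq plays) can serve them all.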


    Transactional archiving

    Transactional archiving is an event-driven approach, which collects the actual transactions which take place between a web server and a web browser. It is primarily used as a means of preserving evidence of the content which was actually viewed on a particular website, on a given date. This may be particularly important for organizations which need to comply with legal or regulatory requirements for disclosing and retaining information.

    A transactional archiving system typically operates by intercepting every HTTP request to, and response from, the web server, filtering each response to eliminate duplicate content, and permanently storing the responses as bitstreams. A transactional archiving system requires the installation of software on the web server, and cannot therefore be used to collect content from a remote website.
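    The filter-and-store step described above amounts to a content-addressed store: hash each response bitstream, keep each distinct one only once, and log every observation. A minimal sketch (the class and field names are invented for illustration):

```python
import hashlib

class TransactionalStore:
    """Stores each distinct response bitstream once, keyed by its SHA-256
    digest, while logging every (url, timestamp) observation of it."""

    def __init__(self):
        self.blobs = {}   # digest -> response bytes (deduplicated)
        self.log = []     # (url, timestamp, digest) per intercepted response

    def record(self, url, timestamp, body):
        digest = hashlib.sha256(body).hexdigest()
        self.blobs.setdefault(digest, body)   # duplicate content costs nothing extra
        self.log.append((url, timestamp, digest))
        return digest

store = TransactionalStore()
d1 = store.record("http://example.org/", "2013-09-01T00:00:00Z", b"<html>v1</html>")
d2 = store.record("http://example.org/", "2013-09-02T00:00:00Z", b"<html>v1</html>")
```

    Because the log keeps every observation while the blob store deduplicates, the archive can still prove exactly what was served on each date.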


    Difficulties and limitations



    Crawlers

    Web archives which rely on web crawling as their primary means of collecting the Web are influenced by the difficulties of web crawling:

    • The robots exclusion protocol may request crawlers not access portions of a website. Some web archivists may ignore the request and crawl those portions anyway.
    • Large portions of a web site may be hidden in the deep Web. For example, the results page behind a web form lies in the deep Web because most crawlers cannot follow a link to the results page.
    • Crawler traps (e.g., calendars) may cause a crawler to download an infinite number of pages, so crawlers are usually configured to limit the number of dynamic pages they crawl.
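    The first of these points can be checked programmatically: a polite crawler consults robots.txt before fetching a URL. Python's standard-library parser makes the decision explicit (the rules and user-agent name below are invented for the example):

```python
from urllib.robotparser import RobotFileParser

# Example robots.txt content a crawler might have downloaded:
robots_txt = """\
User-agent: *
Disallow: /private/
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

allowed = rp.can_fetch("ArchiveBot", "http://example.org/page.html")
blocked = rp.can_fetch("ArchiveBot", "http://example.org/private/x.html")
```

    An archivist who decides to ignore the exclusion protocol simply skips this check; the trade-off is completeness of the archive versus the site owner's stated wishes.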

    However, a native-format web archive (i.e., a fully browsable archive with working links, media, and so on) is only really possible using crawler technology.

    The Web is so large that crawling a significant portion of it takes a large amount of technical resources. The Web is changing so fast that portions of a website may change before a crawler has even finished crawling it.


    General limitations

    • Some web servers are configured to return different pages to web archiver requests than they would in response to regular browser requests. This is typically done to fool search engines into directing more user traffic to a website, but it may also be done to avoid accountability, or to provide enhanced content only to those browsers that can display it.

    Not only must web archivists deal with the technical challenges of web archiving, they must also contend with intellectual property laws. Peter Lyman[5] states that "although the Web is popularly regarded as a public domain resource, it is copyrighted; thus, archivists have no legal right to copy the Web". However, national libraries in many countries do have a legal right to copy portions of the web under an extension of legal deposit.

    Some private non-profit web archives that are made publicly accessible, such as WebCite, the Internet Archive, and the Internet Memory Foundation, allow content owners to hide or remove archived content that they do not want the public to have access to. Other web archives are only accessible from certain locations or have regulated usage. WebCite cites a recent lawsuit against Google's caching, which Google won.[6]


    Aspects of web curation


    Web curation, like any digital curation, entails:

    • Certification of the trustworthiness and integrity of the collection content
    • Collecting verifiable Web assets
    • Providing Web asset search and retrieval
    • Semantic and ontological continuity and comparability of the collection content

    Thus, besides the discussion of methods of collecting the Web, methods of providing access, certification, and organization must be included. A number of popular tools address these curation steps:

    A suite of tools for web curation by the International Internet Preservation Consortium:

    • Heritrix – for harvesting web resources
    • NutchWAX – for searching web archive collections
    • Wayback – an open-source tool for replaying web archives
    • Web Curator Tool – for selection and management of web content for archiving[7]

    Other open source tools for manipulating web archives:

    • WARC Tools - for creating, reading, parsing, and manipulating web archives programmatically
    • Search Tools[dead link] - for indexing and searching full-text and metadata within web archives

    Free but not open source tools also exist:

    • WSDK - The WARC Software Development Kit (WSDK) is a set of simple, compact, and highly optimized Erlang modules for manipulating (creating/reading/writing) the WARC ISO 28500:2009 file format.
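    The WARC record layout such toolkits manipulate is simple enough to sketch without one: a version line, named headers, a blank line, the payload, and two trailing CRLF pairs. A minimal Python (rather than Erlang) illustration of a single "response" record; real tools also add payload digests, a warcinfo record, and per-record gzip compression:

```python
import uuid
from datetime import datetime, timezone

def warc_response_record(target_uri, http_payload):
    """Emit one WARC/1.0 'response' record as bytes, following the basic
    ISO 28500 layout (a simplified sketch, not a complete implementation)."""
    headers = [
        ("WARC-Type", "response"),
        ("WARC-Target-URI", target_uri),
        ("WARC-Date", datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")),
        ("WARC-Record-ID", "<urn:uuid:%s>" % uuid.uuid4()),
        ("Content-Type", "application/http; msgtype=response"),
        ("Content-Length", str(len(http_payload))),
    ]
    head = b"WARC/1.0\r\n" + b"".join(
        ("%s: %s\r\n" % (k, v)).encode("utf-8") for k, v in headers
    )
    # Blank line separates headers from payload; two CRLFs close the record.
    return head + b"\r\n" + http_payload + b"\r\n\r\n"
```

    Concatenating such records (optionally gzip-compressing each one) yields a .warc or .warc.gz file that standard replay tools can read.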

    See also



    References


    1. ^ FAQ FreezePage.com.
    2. ^ http://perma.cc/about
    3. ^ For example, an existing page and a forged copy of it.
    4. ^ Eysenbach and Trudel (2005).
    5. ^ Lyman (2002)
    6. ^ FAQ Webcitation.org
    7. ^ "Web Curator Tool". Webcurator.sourceforge.net. Retrieved 2011-12-10. 
    Bibliography

    External links





    Copyright:
    This article is based on the article http://en.wikipedia.org/wiki/Web_archive from the free encyclopedia http://en.wikipedia.org (http://www.wikipedia.org) and is dual-licensed under the GNU Free Documentation License and Creative Commons CC-BY-SA 3.0 Unported. A list of authors is available on Wikipedia at http://en.wikipedia.org/w/index.php?title=Web_archive&action=history. All information is provided without warranty.

    This article may contain content from dmoz.org.





