Color Flyers Cheap Woodbury MN
Tri Fold Brochure Printing in Woodbury MN
Digital printing in Minnesota has been a door opener for many businesses. Because printers sell essentially the same thing as everyone else, every shop claims that its service, quality and price are better, so each printer has to find something that separates it from the rest. Some business owners find that they see increased productivity after adopting digital technology and short-run processes. These gains can largely be credited to a combination of better pricing and more efficient press performance. Say you have greeting cards that need to be printed: a short-run digital press lets you eliminate obsolete inventory.
With this technology you can print only the cards you need, so orders are produced in the exact quantity required. Even so, this kind of printing system is not for everyone; there are risks and changes that must be dealt with. Nevertheless, the printing industry will continue to change and improve in the years to come. All business owners and companies have to do is determine whether this particular printing technique is what they need.
Digital Color Prints vs Color Copies
A digital camera or digicam is a camera that produces digital images that can be stored on a computer, displayed on a screen and printed. Most cameras sold today are digital, and digital cameras are incorporated into many devices, from PDAs and mobile phones (called camera phones) to vehicles. Digital and movie cameras share an optical system, typically using a lens with a variable diaphragm to focus light onto an image pickup device. The diaphragm and shutter admit the correct amount of light to the imager, just as with film, but the image pickup device is electronic rather than chemical. Unlike film cameras, digital cameras can display images on a screen immediately after they are recorded, and can store and delete images from memory. Many digital cameras can also record video with sound, and some can crop and stitch pictures and perform other elementary image editing.

The history of the digital camera began with Eugene F. Lally of the Jet Propulsion Laboratory, who was thinking about how to use a mosaic photosensor to capture digital images. His 1961 idea was to take pictures of the planets and stars while travelling through space to give information about the astronauts' position. As with Texas Instruments employee Willis Adcock's filmless camera (US patent 4,057,830) in 1972, the technology had yet to catch up with the concept. Steven Sasson, an engineer at Eastman Kodak, invented and built the first electronic camera using a charge-coupled device image sensor in 1975. Earlier electronic cameras used a camera tube; later ones digitized the signal. Early uses were mainly military and scientific, followed by medical and news applications.
In 1986, Japanese company Nikon introduced the first digital single-lens reflex (DSLR) camera, the Nikon SVC. In the mid-to-late 1990s, DSLR cameras became common among consumers, and by the mid-2000s they had largely replaced film cameras. In 2000, Sharp introduced the world's first digital camera phone, the J-SH04 J-Phone, in Japan. By the mid-2000s, higher-end cell phones had an integrated digital camera, and by the beginning of the 2010s almost all smartphones did.

The two major types of digital image sensor are CCD and CMOS. A CCD sensor has one amplifier for all the pixels, while each pixel in a CMOS active-pixel sensor has its own amplifier. Compared to CCDs, CMOS sensors use less power. Cameras with a small sensor use a back-side-illuminated CMOS (BSI-CMOS) sensor. Overall final image quality depends more on the image processing capability of the camera than on the sensor type.

The resolution of a digital camera is often limited by the image sensor that turns light into discrete signals. The brighter the image at a given point on the sensor, the larger the value that is read for that pixel. Depending on the physical structure of the sensor, a color filter array may be used, which requires demosaicing to recreate a full-color image. The number of pixels in the sensor determines the camera's "pixel count". In a typical sensor, the pixel count is the product of the number of rows and the number of columns. For example, a 1,000 by 1,000 pixel sensor would have 1,000,000 pixels, or 1 megapixel. Note that an image with a higher pixel count can still have lower spatial resolution than one with fewer pixels. The final quality of an image depends on all the optical transformations in the chain that produces it; as Carl Zeiss points out, the weakest link in an optical chain determines the final image quality.
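The rows-times-columns arithmetic above is simple enough to sketch directly (function names here are illustrative, not from any camera API):

```python
def pixel_count(rows, columns):
    """Pixel count of a sensor: simply rows x columns."""
    return rows * columns

def megapixels(rows, columns):
    """Express the pixel count in megapixels (millions of pixels)."""
    return pixel_count(rows, columns) / 1_000_000

# The 1,000 x 1,000 sensor from the text: 1,000,000 pixels, i.e. 1 megapixel.
print(megapixels(1000, 1000))  # 1.0

# A common 4:3 sensor layout, 4000 x 3000, gives 12 megapixels.
print(megapixels(4000, 3000))  # 12.0
```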
In the case of a digital camera, a simple way of putting it is that the lens determines the maximum sharpness of the image while the image sensor determines the maximum resolution: a lens with very poor sharpness on a high-resolution camera can be compared to a lens with good sharpness on a camera with lower resolution. At the heart of a digital camera is a CCD or a CMOS image sensor.

Since the first digital backs were introduced, there have been three main methods of capturing the image, each based on the hardware configuration of the sensor and color filters. Single-shot capture systems use either one sensor chip with a Bayer filter mosaic, or three separate image sensors (one each for the primary additive colors red, green, and blue) which are exposed to the same image via a beam splitter (see Three-CCD camera). Multi-shot systems expose the sensor to the image in a sequence of three or more openings of the lens aperture. There are several ways to apply the multi-shot technique. Originally, the most common was to use a single image sensor with three filters passed in front of the sensor in sequence to obtain the additive color information. Another multi-shot method, called microscanning, uses a single sensor chip with a Bayer filter and physically moves the sensor on the focal plane of the lens to construct a higher-resolution image than the native resolution of the chip. A third version combines the two methods without a Bayer filter on the chip. The third method is called scanning because the sensor moves across the focal plane much like the sensor of an image scanner. The linear or tri-linear sensors in scanning cameras use only a single line of photosensors, or three lines for the three colors.
Scanning may be accomplished by moving the sensor (for example, when using color co-site sampling) or by rotating the whole camera; a digital rotating line camera offers images of very high total resolution. The choice of method for a given capture is determined largely by the subject matter: it is usually inappropriate to attempt to capture a moving subject with anything but a single-shot system. However, the higher color fidelity and larger file sizes and resolutions available with multi-shot and scanning backs make them attractive for commercial photographers working with stationary subjects and large-format photographs. Improvements in single-shot cameras and image file processing at the beginning of the 21st century made single-shot cameras almost completely dominant, even in high-end commercial photography.

Most current consumer digital cameras use a Bayer filter mosaic in combination with an optical anti-aliasing filter to reduce the aliasing due to the reduced sampling of the different primary-color images. A demosaicing algorithm is used to interpolate color information to create a full array of RGB image data. Cameras that use a beam-splitter single-shot 3CCD approach, a three-filter multi-shot approach, color co-site sampling or a Foveon X3 sensor need neither anti-aliasing filters nor demosaicing. Firmware in the camera, or software in a raw converter program such as Adobe Camera Raw, interprets the raw data from the sensor to obtain a full-color image, because the RGB color model requires three intensity values for each pixel: one each for red, green, and blue (other color models, when used, also require three or more values per pixel). A single sensor element cannot record these three intensities simultaneously, so a color filter array (CFA) must be used to selectively filter a particular color for each pixel.
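A CFA can be sketched as a lookup from pixel position to the single color sampled there. The RGGB layout below is one common Bayer ordering (the tile constant and function name are my own, purely illustrative):

```python
# One common Bayer ordering (RGGB): green at two opposite corners of
# each repeating 2x2 tile, red and blue in the other two positions.
BAYER_TILE = [["R", "G"],
              ["G", "B"]]

def cfa_color(row, col):
    """Color sampled by the Bayer filter at a given sensor pixel."""
    return BAYER_TILE[row % 2][col % 2]

# Half of all pixels sample green, a quarter red, a quarter blue;
# the missing two values per pixel must be interpolated (demosaiced).
counts = {"R": 0, "G": 0, "B": 0}
for r in range(4):
    for c in range(4):
        counts[cfa_color(r, c)] += 1
print(counts)  # {'R': 4, 'G': 8, 'B': 4}
```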
The Bayer filter pattern is a repeating 2x2 mosaic of light filters, with green ones at opposite corners and red and blue in the other two positions. The high proportion of green takes advantage of properties of the human visual system, which determines brightness mostly from green and is far more sensitive to brightness than to hue or saturation. Sometimes a 4-color filter pattern is used, often involving two different hues of green; this provides potentially more accurate color but requires a slightly more complicated interpolation process. The color intensity values not captured for each pixel are interpolated from the values of adjacent pixels that represent the color being calculated.

Cameras with digital image sensors smaller than the typical 35mm film frame have a smaller field or angle of view when used with a lens of the same focal length, because angle of view is a function of both focal length and the sensor or film size used. The crop factor is relative to the 35mm film format. If a smaller sensor is used, as in most digicams, the field of view is cropped by the sensor to smaller than the 35mm full-frame format's field of view. This narrowing of the field of view may be described as a crop factor: a factor by which a longer focal length lens would be needed to get the same field of view on a 35mm film camera. Full-frame digital SLRs use a sensor of the same size as a frame of 35mm film. Common values for the field-of-view crop in DSLRs using active-pixel sensors include 1.3x for some Canon (APS-H) sensors; 1.5x for Sony APS-C sensors used by Nikon, Pentax and Konica Minolta and for Fujifilm sensors; 1.6x (APS-C) for most Canon sensors; ~1.7x for Sigma's Foveon sensors; and 2x for Kodak and Panasonic Four Thirds sensors currently used by Olympus and Panasonic. Crop factors for non-SLR consumer compact and bridge cameras are larger, frequently 4x or more.
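Since crop factor is relative to the 35mm frame, it can be computed as the ratio of the full-frame diagonal (36 x 24 mm, about 43.3 mm) to the sensor diagonal; multiplying a lens's focal length by that factor gives its 35mm-equivalent field of view. A minimal sketch (the APS-C dimensions used below are approximate, and vary slightly by manufacturer):

```python
import math

def diagonal(width_mm, height_mm):
    """Diagonal of a sensor or film frame, in mm."""
    return math.hypot(width_mm, height_mm)

# The 35mm full-frame reference: a 36 x 24 mm frame, diagonal ~43.3 mm.
FULL_FRAME_DIAG = diagonal(36.0, 24.0)

def crop_factor(width_mm, height_mm):
    """Crop factor relative to the 35mm full-frame format."""
    return FULL_FRAME_DIAG / diagonal(width_mm, height_mm)

def equivalent_focal_length(focal_mm, width_mm, height_mm):
    """Focal length giving the same field of view on a 35mm camera."""
    return focal_mm * crop_factor(width_mm, height_mm)

# An APS-C sensor of roughly 23.6 x 15.7 mm has a crop factor near 1.5,
# so a 50 mm lens frames like a ~76 mm lens would on full frame.
print(round(crop_factor(23.6, 15.7), 2))  # ~1.53
print(round(equivalent_focal_length(50, 23.6, 15.7), 1))
```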
Digital cameras come in a wide range of sizes, prices and capabilities. In addition to general-purpose digital cameras, specialized cameras including multispectral imaging equipment and astrographs are used for scientific, military, medical and other special purposes.

Compact cameras are intended to be portable (pocketable) and are particularly suitable for casual snapshots. Many incorporate a retractable lens assembly that provides optical zoom; in most models an auto-actuating lens cover protects the lens from the elements. Most ruggedized or water-resistant models do not retract, and most with superzoom capability do not retract fully. Compact cameras are usually designed to be easy to use. Almost all include an automatic mode, or "auto mode", which makes all camera settings for the user; some also have manual controls. Compact digital cameras typically contain a small sensor, which trades picture quality for compactness and simplicity, and images can usually only be stored using lossy compression (JPEG). Most have a built-in flash, usually of low power, sufficient for nearby subjects; a few high-end compact digital cameras have a hotshoe for connecting an external flash. Live preview is almost always used to frame the photo on an integrated LCD, and in addition to taking still photographs almost all compact cameras can record video. Compacts often have macro capability and zoom lenses; the zoom range (up to 30x) is generally enough for candid photography, but less than is available on bridge cameras (more than 60x) or with the interchangeable lenses of DSLR cameras, which come at a much higher cost.
Autofocus systems in compact digital cameras are generally based on contrast detection using the image data from the live preview feed of the main imager, though some compacts use a hybrid autofocus system similar to what is commonly available on DSLRs. Some high-end travel compacts combine a 30x optical zoom with full manual control via a lens ring, an electronic viewfinder, hybrid optical image stabilization, a built-in flash, Full HD 60p video, RAW capture, burst shooting up to 10 fps, and built-in Wi-Fi with NFC and GPS. Typically, compact digital cameras incorporate a nearly silent leaf shutter into the lens but play a simulated shutter sound for skeuomorphic purposes.

For low cost and small size, these cameras typically use image sensor formats with a diagonal between 6 and 11 mm, corresponding to a crop factor between 7 and 4. This gives them weaker low-light performance, greater depth of field, generally closer focusing ability, and smaller components than cameras using larger sensors. Some compacts use a larger sensor, including, at the high end, pricey full-frame models such as the Sony Cyber-shot DSC-RX1, with capability approaching that of a DSLR.

A variety of additional features are available depending on the model: GPS, a compass, and a barometer or altimeter for recording altitude above or depth below sea level; some models are rugged and waterproof. Starting in 2011, some compact digital cameras can take 3D still photos; these 3D compact stereo cameras can capture 3D panoramic photos, with dual lenses or even a single lens, for playback on a 3D TV. In 2013, Sony released two add-on camera models without a display, to be used with a smartphone or tablet and controlled by a mobile application via Wi-Fi. Rugged compact cameras typically include protection against submersion, hot and cold conditions, shock and pressure.
Terms used to describe such properties include waterproof, freezeproof, heatproof, shockproof and crushproof, respectively. Nearly all major camera manufacturers have at least one product in this category. Some are waterproof to a considerable depth, up to 82 feet (27 m); others only to 10 feet (3 m), and only a few will float. Rugged models often lack some of the features of an ordinary compact camera, but they have video capability and the majority can record sound. Most have image stabilization and a built-in flash. Touchscreen LCDs and GPS do not work underwater.

GoPro and other brands offer action cameras, which are rugged, small and can be easily attached to a helmet, arm, bicycle, etc. Most have a wide-angle, fixed-focus lens and can take still pictures and video, typically with sound. The rising popularity of action cameras tracks the desire of many people to share their photos and videos on social media; competition among action camera manufacturers has produced many options at decreasing prices, and bundles with waterproof housings and GoPro-compatible mounting accessories are now common.

360-degree cameras can take pictures or video spanning 360 degrees, using two back-to-back lenses shooting at the same time. Examples include the Ricoh Theta S, Nikon KeyMission 360 and Samsung Gear 360. The Nico360, launched in 2016, was claimed as the world's smallest 360-degree camera, at 46 x 46 x 28 mm (1.8 x 1.8 x 1.1 in) and priced under $200. With built-in stitching for a virtual-reality mode, Wi-Fi and Bluetooth, it can live stream, and being water resistant it can also serve as an action camera. There is a trend toward action cameras that can shoot 360 degrees at a resolution of at least 4K.

Bridge cameras physically resemble DSLRs, and are sometimes called DSLR-shape or DSLR-like.
They provide some similar features but, like compacts, they use a fixed lens and a small sensor. Some also have PSAM (program, shutter-priority, aperture-priority and manual) exposure modes. Most use live preview to frame the image. Their usual autofocus is by the same contrast-detect mechanism as compacts, but many bridge cameras have a manual focus mode and some have a separate focus ring for greater control. The combination of a big body and a small sensor allows superzoom lenses and wide apertures. Bridge cameras generally include an image stabilization system to enable longer handheld exposures, sometimes performing better than DSLRs in low-light conditions.

As of 2014, bridge cameras come in two principal classes in terms of sensor size. The first is the more traditional 1/2.3" sensor (as measured by image sensor format), which gives more flexibility in lens design and allows handholdable zoom from a 20 to 24 mm (35mm-equivalent) wide angle all the way up to over 1000 mm supertele. The second is a 1" sensor, which allows better image quality, particularly in low light (higher ISO), but puts greater constraints on lens design, resulting in zoom lenses that stop at 200 mm (constant aperture, e.g. Sony RX10) or 400 mm (variable aperture, e.g. Panasonic Lumix FZ1000) equivalent, corresponding to an optical zoom factor of roughly 10 to 15.

Some bridge cameras have a lens thread for attaching accessories such as wide-angle or telephoto converters, filters such as UV or circular polarizing filters, and lens hoods. The scene is composed by viewing the display or the electronic viewfinder (EVF). Most have a slightly longer shutter lag than a DSLR. Many of these cameras can store images in a raw format in addition to supporting JPEG. The majority have a built-in flash, but only a few have a hotshoe. In bright sun, the quality difference between a good compact camera and a digital SLR is minimal, but bridge cameras are more portable, cost less and have greater zoom ability.
Thus a bridge camera may better suit outdoor daytime activities, except when seeking professional-quality photos.

In late 2008, a new type of camera emerged, the mirrorless interchangeable-lens camera (MILC), which uses various sensor sizes and offers lens interchangeability. These cameras are simpler and more compact than DSLRs because they have no lens reflex system. MILC models are available with various sensor sizes, including: a small 1/2.3 inch sensor, as commonly used in bridge cameras, in the original Pentax Q (more recent Pentax Q versions have a slightly larger 1/1.7 inch sensor); a 1 inch sensor; a Micro Four Thirds sensor; an APS-C sensor, as in the Sony NEX series, Fujifilm X series, Pentax K-01, and Canon EOS M; and a full-frame (35 mm) sensor in models such as the Sony Alpha 7, while the Hasselblad X1D is the first medium-format MILC. A disadvantage of MILCs compared to DSLRs is shorter battery life, due to the high energy consumption of the electronic viewfinder. Olympus and Panasonic have released many Micro Four Thirds cameras with interchangeable lenses that are fully compatible with each other without any adapter, while other makers use proprietary mounts. In 2014, Kodak released its first Micro Four Thirds camera. As of March 2014, MILC cameras are available that appeal to both amateurs and professionals.

While most digital cameras with interchangeable lenses feature a lens mount of some kind, there are also a number of modular cameras, where the shutter and sensor are incorporated into the lens module. The first such modular camera was the Minolta Dimâge V in 1996, followed by the Minolta Dimâge EX 1500 in 1998 and the Minolta MetaFlash 3D 1500 in 1999. In 2009, Ricoh released the Ricoh GXR modular camera. At CES 2013, Sakar International announced the Polaroid iM1836, an 18 MP camera with a 1" sensor and an interchangeable sensor-lens unit.
An adapter for Micro Four Thirds, Nikon and K-mount lenses was planned to ship with the camera.

There are also a number of add-on camera modules for smartphones, called lens-style cameras (lens cameras). They contain all the components of a digital camera in a lens-shaped module but lack a viewfinder, display and most controls; instead they mount on a smartphone and use its display and controls.

Digital single-lens reflex cameras (DSLRs) use a reflex mirror that swivels between two positions. By default, the mirror sits at 45 degrees from horizontal, blocking light from reaching the sensor and reflecting it from the lens up to a pentamirror or pentaprism, from which, after further reflections, it arrives at the viewfinder. When the shutter release is fully pressed, the mirror swings up to lie horizontally below the pentamirror/prism, so the viewfinder goes dark while the light strikes the sensor directly for the duration of the exposure. Autofocus is accomplished using sensors in the mirror box. Some DSLRs have a "live view" mode that allows framing using the screen with the image from the sensor.

These cameras have much larger sensors than the other types, typically 18 mm to 36 mm on the diagonal (crop factor 2, 1.6, or 1). The larger sensor permits more light to be received by each pixel; this, combined with the relatively large lenses, provides superior low-light performance. For the same field of view and the same aperture, a larger sensor also gives shallower focus. DSLRs use interchangeable lenses for versatility. Some lenses are made for digital SLR use only, but a recent trend is for lenses that can also be used on interchangeable-lens video cameras, with or without an adapter.

A DSLT uses a fixed translucent mirror instead of the moving reflex mirror of a DSLR.
A translucent mirror, also called a transmissive or semi-transparent mirror, sends light along two paths at the same time. It reflects part of the light toward a pentaprism/pentamirror and on to an optical viewfinder (OVF), as is done with the reflex mirror in DSLR cameras, while also passing light along a second path to the sensor. The total amount of light is not changed; some of it simply travels one path and some the other. A consequence is that DSLT cameras should shoot about half a stop differently from DSLRs. One advantage of a DSLT is that the viewfinder blackout a DSLR user experiences while the reflex mirror moves out of the light path does not occur: because there is no moment at which light is not traveling along both paths, DSLT cameras get the benefit of continuous autofocus tracking. This is especially beneficial for burst-mode shooting in low light and for tracking focus when recording video. Until early 2014, only Sony had released DSLT cameras; by March 2014, Sony had released more DSLTs than DSLRs, with a relatively complete lens line-up.

A rangefinder is a device to measure subject distance, with the intent of adjusting the focus of a camera's objective lens accordingly (an open-loop controller). The rangefinder and lens focusing mechanism may or may not be coupled. In common parlance, the term "rangefinder camera" is interpreted very narrowly to denote manual-focus cameras with a visually read-out optical rangefinder based on parallax. Most digital cameras achieve focus through analysis of the image captured by the objective lens; distance estimation, if it is provided at all, is only a byproduct of the focusing process (a closed-loop controller).

A line-scan camera traditionally has a single row of pixel sensors, instead of a matrix of them.
The lines are continuously fed to a computer that joins them to one another and builds an image. This is most commonly done by connecting the camera output to a frame grabber in a PCI slot of an industrial computer. The frame grabber buffers the image and sometimes provides some processing before delivering it to the computer software. Multiple rows of sensors may be used to make color images, or to increase sensitivity by TDI (time delay and integration). Many industrial applications require a wide field of view, and maintaining consistent light over large 2D areas is traditionally quite difficult; with a line-scan camera, all that is necessary is to provide even illumination across the "line" currently being viewed. This makes sharp pictures possible of objects that pass the camera at high speed. Such cameras are also commonly used for photo finishes, determining the winner when multiple competitors cross the finish line at nearly the same time, and as industrial instruments for analyzing fast processes. Line-scan cameras are also extensively used in imaging from satellites (see push broom scanner), where the row of sensors is perpendicular to the direction of satellite motion, and in scanners, where the camera moves horizontally.

Stand-alone cameras can also be used as remote cameras. One kind weighs 2.31 ounces, has a periscope shape and an IPX7 water- and dust-resistance rating, which can be enhanced to IPX8 by using a cap. Such cameras have no viewfinder or LCD. The lens is a 146-degree wide-angle or standard lens, with fixed focus. The camera can have a microphone and speaker, and can take photos and video. As a remote camera, a phone app for Android or iOS is needed to send live video, change settings, take photos, or use time lapse.
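The line-scan capture described above, where single rows arrive one at a time and software joins them into a 2D image, reduces to stacking rows. A minimal sketch; the fake generator below stands in for a real frame grabber feed, which it is not:

```python
def assemble_linescan_image(line_source):
    """Stack successive single-row exposures into a 2D image.

    `line_source` is any iterable yielding one row of pixel values
    per trigger, as a frame grabber would deliver them.
    """
    image = []
    for row in line_source:
        image.append(list(row))  # one scan line becomes one image row
    return image

# Hypothetical stand-in for a grabber delivering 3 rows of 4 pixels.
def fake_line_source():
    for i in range(3):
        yield [i * 10 + c for c in range(4)]

img = assemble_linescan_image(fake_line_source())
print(len(img), len(img[0]))  # 3 4
```

The same loop structure applies whether the relative motion comes from the subject (photo finish), the camera (scanner), or the satellite (push broom).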
Many devices have a built-in digital camera, including, for example, smartphones, mobile phones, PDAs and laptop computers. Built-in cameras generally store the images in the JPEG file format. Mobile phones incorporating digital cameras were introduced in Japan in 2001 by J-Phone. In 2003 camera phones outsold stand-alone digital cameras, and in 2006 they outsold film and digital stand-alone cameras. Five billion camera phones were sold in five years, and by 2007 more than half of the installed base of all mobile phones were camera phones. Sales of separate cameras peaked in 2008.

Sales of traditional digital cameras have since declined due to the increasing use of smartphones for casual photography, which also enable easier manipulation and sharing of photos through apps and web-based services. Bridge cameras, in contrast, have held their ground with functionality that most smartphone cameras lack, such as optical zoom and other advanced features. DSLRs have also lost ground to mirrorless interchangeable-lens cameras (MILCs) offering the same sensor size in a smaller body; a few expensive MILCs use a full-frame sensor, as professional DSLRs do.

In response to the convenience and flexibility of smartphone cameras, some manufacturers produced "smart" digital cameras that combine features of traditional cameras with those of a smartphone. In 2012, Nikon and Samsung released the Coolpix S800c and Galaxy Camera, the first two digital cameras to run the Android operating system. Since this software platform is used in many smartphones, they can integrate with services such as e-mail attachments, social networks and photo-sharing sites, as smartphones do, and use other Android-compatible software as well. In an inversion, some phone makers have introduced smartphones with cameras designed to resemble traditional digital cameras.
Nokia released the 808 PureView and Lumia 1020 in 2012 and 2013; the two devices run the Symbian and Windows Phone operating systems respectively, and both include a 41-megapixel camera (along with a camera grip attachment for the latter). Similarly, Samsung introduced the Galaxy S4 Zoom, with a 16-megapixel camera and 10x optical zoom, combining traits of the Galaxy S4 Mini and the Galaxy Camera. The Panasonic Lumix DMC-CM1, an Android 4.4 KitKat smartphone, has a 20 MP 1" sensor, the largest ever in a smartphone, with a fixed Leica lens equivalent to 28mm at f/2.8; it can take RAW images and 4K video, and is 21mm thick. Light-field cameras were introduced in 2013, with one consumer product and several professional ones.

After a big dip in sales in 2012, consumer digital camera sales declined again in 2013, by 36 percent. In 2011, compact digital cameras sold at a rate of 10 million per month; in 2013 sales fell to about 4 million per month. DSLR and MILC sales also declined in 2013, by 10-15%, after almost ten years of double-digit growth. Worldwide unit sales of digital cameras fell continuously from 148 million in 2011 to 58 million in 2015, and tend to decrease further in the following years. Film camera sales peaked at 36.671 million units in 1997, while digital camera sales began in 1999. By 2008 the film camera market was essentially dead, and digital camera sales peaked at 121.463 million units in 2010. Camera phones, introduced in 2002, sold 80 million units per year by 2003 and hundreds of millions per year by 2011, when digital camera sales began to decline. In 2015, digital camera sales were 35.395 million units, less than a third of their peak and also slightly fewer than film camera sales at their peak.
Many digital cameras can connect directly to a computer to transfer data. A common alternative is a card reader, which may be capable of reading several types of storage media as well as transferring data to the computer at high speed. Using a card reader also avoids draining the camera battery during the download process, and an external card reader allows convenient direct access to the images on a collection of storage media. If only one storage card is in use, however, moving it back and forth between the camera and the reader can be inconvenient. Many computers have a card reader built in, at least for SD cards.

Many modern cameras support the PictBridge standard, which allows them to send data directly to a PictBridge-capable printer without the need for a computer. Wireless connectivity can likewise provide for printing photos without a cable connection. An instant-print camera is a digital camera with a built-in printer, offering functionality similar to an instant camera, which uses instant film to quickly generate a physical photograph; such non-digital cameras were popularized by Polaroid in 1972.

Many digital cameras include a video output port. Usually S-Video, it sends a standard-definition video signal to a television, allowing the user to show one picture at a time. Buttons or menus on the camera allow the user to select the photo, advance from one to another, or automatically send a "slide show" to the TV. HDMI has been adopted by many high-end digital camera makers to show photos in their high-resolution quality on an HDTV. In January 2008, Silicon Image announced a new technology for sending video from mobile devices to a television in digital form: MHL sends pictures as a video stream, up to 1080p resolution, and is compatible with HDMI. Some DVD recorders and television sets can read memory cards used in cameras; alternatively, several types of flash card readers have TV output capability.
Cameras can be equipped with a varying amount of environmental sealing to provide protection against splashing water, moisture (humidity and fog), dust and sand, or complete waterproofness to a certain depth and for a certain duration. The latter is one approach to underwater photography, the other being the use of waterproof housings. Many waterproof digital cameras are also shockproof and resistant to low temperatures. Many digital cameras have preset modes for different applications. Within the constraints of correct exposure, various parameters can be changed, including exposure, aperture, focusing, light metering, white balance, and equivalent sensitivity. For example, a portrait mode might use a wider aperture to render the background out of focus, and would seek out and focus on a human face rather than other image content. A CompactFlash (CF) card is one of many media types used to store digital photographs. Many camera phones and most stand-alone digital cameras store image data in flash memory cards or other removable media. Most stand-alone cameras use the SD format, while a few use CompactFlash or other types. In January 2012, a faster XQD card format was announced. As of early 2014, some high-end cameras have two hot-swappable memory slots; photographers can swap one of the memory cards while the camera is on, and each slot can accept either CompactFlash or SD cards. All new Sony cameras also have two memory slots, one for Memory Stick and one for SD cards, but these are not hot-swappable. A few cameras have used other removable storage such as Microdrives (very small hard disk drives), CD singles (185 MB), and 3.5" floppy disks. Other, more unusual formats have also been used. Most manufacturers of digital cameras do not provide drivers and software to allow their cameras to work with Linux or other free software. Still, many cameras use the standard USB storage protocol and are thus easily usable. Other cameras are supported by the gPhoto project.
Main article: Image file formats The Joint Photographic Experts Group standard (JPEG) is the most common file format for storing image data. Other file types include the Tagged Image File Format (TIFF) and various raw image formats. Many cameras, especially high-end ones, support a raw image format. A raw image is the unprocessed set of pixel data directly from the camera's sensor, often saved in a proprietary format. Adobe Systems has released the DNG format, a royalty-free raw image format used by at least 10 camera manufacturers. Raw files initially had to be processed in specialized image editing programs, but over time many mainstream editing programs, such as Google's Picasa, have added support for raw images. Rendering to standard images from raw sensor data allows more flexibility in making major adjustments without losing image quality or retaking the picture. Formats for movies are AVI, DV, MPEG, MOV (often containing motion JPEG), WMV, and ASF (basically the same as WMV). Recent formats include MP4, which is based on the QuickTime format and uses newer compression algorithms to allow longer recording times in the same space. Other formats used in cameras (but not for pictures) are the Design rule for Camera File system (DCF), a specification used in almost all cameras since 1998, which defines an internal file structure and naming. Also used is the Digital Print Order Format (DPOF), which dictates in what order images are to be printed and how many copies of each. The 1998 DCF specification defines a logical file system with 8.3 filenames and makes the use of either FAT12, FAT16, FAT32 or exFAT mandatory for its physical layer in order to maximize platform interoperability. Most cameras record Exif data, which provides metadata about the picture; Exif data may include aperture, exposure time, focal length, date and time taken, and location.
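The DCF layout described above can be sketched as a naming check. This is a simplified, hypothetical validator, not the full specification: DCF places images under a DCIM root, in directories like "100CANON" (a three-digit number from 100 to 999 plus five characters) with 8.3 file names like "IMG_0001.JPG" (four characters, a four-digit frame number, and a three-letter extension). The character classes below are an approximation for illustration.

```python
import re

# Simplified DCF patterns (an approximation, not the full standard):
# directory: 3-digit number 100-999 followed by 5 uppercase/digit/underscore chars
DIR_RE = re.compile(r"^(1[0-9]{2}|[2-9][0-9]{2})[0-9A-Z_]{5}$")
# file: 4 chars + 4-digit frame number + a common image extension (8.3 form)
FILE_RE = re.compile(r"^[0-9A-Z_]{4}[0-9]{4}\.(JPG|TIF|THM|RAW)$")

def is_dcf_path(directory: str, filename: str) -> bool:
    """Check a directory/file pair against the simplified DCF pattern."""
    return bool(DIR_RE.match(directory)) and bool(FILE_RE.match(filename))

print(is_dcf_path("100CANON", "IMG_0001.JPG"))  # True
print(is_dcf_path("DCIM", "photo.jpeg"))        # False
```

This kind of predictable naming is what lets card readers, printers, and DPOF-aware devices from different vendors locate images on the same card.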
Digital cameras have become smaller over time, resulting in an ongoing need to develop a battery small enough to fit in the camera and yet able to power it for a reasonable length of time. Digital cameras use either proprietary or standard consumer batteries. As of March 2014, most cameras use proprietary lithium-ion batteries; some use standard AA batteries, and others primarily use a proprietary lithium-ion rechargeable battery pack but offer an optional AA battery holder. The most common class of battery used in digital cameras is the proprietary format, built to a manufacturer's custom specifications. Almost all proprietary batteries are lithium-ion. In addition to being available from the OEM, aftermarket replacement batteries are commonly available for most camera models. Main article: Commercial off-the-shelf Digital cameras that use off-the-shelf batteries are typically designed to accept both single-use disposable and rechargeable batteries, but not both types at the same time. The most common off-the-shelf battery size is AA. CR2, CR-V3, and AAA batteries are also used in some cameras. The CR2 and CR-V3 batteries are lithium based and intended for single use. Rechargeable RCR-V3 lithium-ion batteries are also available as an alternative to non-rechargeable CR-V3 batteries. Some battery grips for DSLRs come with a separate holder to accommodate AA cells as an external power source. Main article: Digital single-lens reflex camera When digital cameras became common, many photographers asked whether their film cameras could be converted to digital. The answer was yes and no. For the majority of 35 mm film cameras the answer is no; the reworking and cost would be too great, especially as lenses have been evolving along with cameras.
For most cameras, a conversion to digital, to give enough space for the electronics and allow a liquid crystal display for previewing, would require removing the back of the camera and replacing it with a custom-built digital unit. Many early professional SLR cameras, such as the Kodak DCS series, were developed from 35 mm film cameras. The technology of the time, however, meant that rather than being digital "backs", the bodies of these cameras were mounted on large, bulky digital units, often bigger than the camera portion itself. These were factory-built cameras, however, not aftermarket conversions. A notable exception is the Nikon E2 and Nikon E3, which used additional optics to convert the 35 mm format to a 2/3-inch CCD sensor. A few 35 mm cameras have had digital camera backs made by their manufacturer, Leica being a notable example. Medium format and large format cameras (those using film stock greater than 35 mm) have low unit production, and typical digital backs for them cost over $10,000. These cameras also tend to be highly modular, with handgrips, film backs, winders, and lenses available separately to fit various needs. The very large sensors these backs use lead to enormous image sizes. For example, Phase One's P45 39 MP back creates a single TIFF image of up to 224.6 MB, and even greater pixel counts are available. Medium format digitals such as this are geared more towards studio and portrait photography than their smaller DSLR counterparts; the ISO speed in particular tends to have a maximum of 400, versus 6400 for some DSLR cameras. (The Canon EOS-1D Mark IV and Nikon D3S reach ISO 12800 plus an expanded Hi-3 setting of ISO 102400, and the Canon EOS-1D X reaches ISO 204800.) Main article: Digital camera back In the industrial and high-end professional photography market, some camera systems use modular (removable) image sensors.
For example, some medium format SLR cameras, such as the Mamiya 645D series, allow installation of either a digital camera back or a traditional photographic film back. Linear array cameras are also called scan backs. Most earlier digital camera backs used linear array sensors, moving vertically to digitize the image; many of them capture only grayscale images. The relatively long exposure times, in the range of seconds or even minutes, generally limit scan backs to studio applications, where all aspects of the photographic scene are under the photographer's control. Some other camera backs use CCD arrays similar to typical cameras; these are called single-shot backs. Since it is much easier to manufacture a high-quality linear CCD array with only thousands of pixels than a CCD matrix with millions, very high resolution linear CCD camera backs were available much earlier than their CCD matrix counterparts. For example, one could buy an (albeit expensive) camera back with over 7,000 pixels of horizontal resolution in the mid-1990s, yet as of 2004 it was still difficult to buy a comparable CCD matrix camera of the same resolution. Rotating line cameras, with about 10,000 color pixels in their sensor line, were able, as of 2005, to capture about 120,000 lines during one full 360-degree rotation, thereby creating a single digital image of 1,200 megapixels. Most modern digital camera backs use CCD or CMOS matrix sensors. The matrix sensor captures the entire image frame at once, instead of incrementally scanning the frame area during a prolonged exposure. For example, Phase One produced a 39-million-pixel digital camera back with a 49.1 x 36.8 mm CCD in 2008. This CCD array is a little smaller than a frame of 120 film and much larger than a 35 mm frame (36 x 24 mm). In comparison, consumer digital cameras use sensors ranging from 36 x 24 mm (full frame on high-end consumer DSLRs) down to 1.28 x 0.96 mm (on camera phones).
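The file sizes and pixel counts quoted above follow from simple arithmetic. The sketch below estimates an uncompressed image size from pixel count, channel count, and bit depth; the 16-bit, three-channel assumption is ours, used only to show that it lands near the ~224.6 MB TIFF figure cited for a 39 MP back.

```python
def uncompressed_size_mib(pixels: int, channels: int = 3, bits_per_sample: int = 16) -> float:
    """Rough size of an uncompressed image in MiB (no header, no compression)."""
    return pixels * channels * bits_per_sample / 8 / 2**20

# A 39-megapixel back at 16 bits per channel, three channels:
print(round(uncompressed_size_mib(39_000_000), 1))  # ~223.2 MiB, close to the ~224.6 MB TIFF cited

# A rotating line camera: a 10,000-pixel line x 120,000 lines per rotation
print(10_000 * 120_000 / 1_000_000)  # 1200.0 megapixels
```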
Digital Printing Press: An Update
Color (American English) or colour (Commonwealth English) is the characteristic of human visual perception described through color categories, with names such as red, yellow, purple, or blue. Colors can appear different depending on their surrounding colors and shapes; two patches of exactly the same color, for example, can appear to differ in lightness against different surroundings. This perception of color derives from the stimulation of cone cells in the human eye by electromagnetic radiation in the spectrum of light. Color categories and physical specifications of color are associated with objects through the wavelength of the light that is reflected from them. This reflection is governed by the object's physical properties such as light absorption, emission spectra, etc. By defining a color space, colors can be identified numerically by coordinates. The RGB color space, for instance, corresponds to human trichromacy and to the three cone cell types that respond to three bands of light: long wavelengths, peaking near 564–580 nm (red); medium wavelengths, peaking near 534–545 nm (green); and short wavelengths, peaking near 420–440 nm (blue). Other color spaces may have more than three dimensions, such as the CMYK color model, wherein one of the dimensions relates to a color's colorfulness. The photo-receptivity of the "eyes" of other species varies considerably from that of humans and results in correspondingly different color perceptions that cannot readily be compared to one another. Honeybees and bumblebees, for instance, have trichromatic color vision sensitive to ultraviolet (electromagnetic radiation with a wavelength from 10 nm (30 PHz) to 400 nm (750 THz), shorter than that of visible light but longer than X-rays) but are insensitive to red.
Papilio butterflies possess six types of photoreceptors and may have pentachromatic vision. The most complex color vision system in the animal kingdom has been found in stomatopods (such as the mantis shrimp), with up to 12 spectral receptor types thought to work as multiple dichromatic units. The science of color is sometimes called chromatics, colorimetry, or simply color science. It includes the perception of color by the human eye and brain, the origin of color in materials, color theory in art, and the physics of electromagnetic radiation in the visible range (that is, what is commonly referred to simply as light). Electromagnetic radiation is characterized by its wavelength (or frequency) and its intensity. When the wavelength is within the visible spectrum (the range of wavelengths humans can perceive, approximately from 390 nm to 700 nm), it is known as "visible light". Most light sources emit light at many different wavelengths; a source's spectrum is a distribution giving its intensity at each wavelength. Although the spectrum of light arriving at the eye from a given direction determines the color sensation in that direction, there are many more possible spectral combinations than color sensations. In fact, one may formally define a color as a class of spectra that give rise to the same color sensation, although such classes would vary widely among different species, and to a lesser extent among individuals within the same species. In each such class the members are called metamers of the color in question. The familiar colors of the rainbow in the spectrum – a name Isaac Newton coined in 1671 from the Latin word for appearance or apparition – include all those colors that can be produced by visible light of a single wavelength only, the pure spectral or monochromatic colors.
Approximate frequencies (in terahertz) and wavelengths (in nanometers) can be tabulated for the various pure spectral colors; the wavelengths are as measured in air or vacuum (see refractive index). Such a table should not be interpreted as a definitive list – the pure spectral colors form a continuous spectrum, and how it is divided into distinct colors linguistically is a matter of culture and historical contingency (although people everywhere have been shown to perceive colors in the same way). A common list identifies six main bands: red, orange, yellow, green, blue, and violet. Newton's conception included a seventh color, indigo, between blue and violet. It is possible that what Newton referred to as blue is nearer to what today is known as cyan, and that indigo was simply the dark blue of the indigo dye that was being imported at the time. The intensity of a spectral color, relative to the context in which it is viewed, may alter its perception considerably; for example, a low-intensity orange-yellow is brown, and a low-intensity yellow-green is olive-green. The color of an object depends on both the physics of the object in its environment and the characteristics of the perceiving eye and brain. Physically, objects can be said to have the color of the light leaving their surfaces, which normally depends on the spectrum of the incident illumination and the reflectance properties of the surface, as well as potentially on the angles of illumination and viewing. Some objects not only reflect light, but also transmit light or emit light themselves, which also contributes to the color. A viewer's perception of the object's color depends not only on the spectrum of the light leaving its surface, but also on a host of contextual cues, so that color differences between objects can be discerned mostly independently of the lighting spectrum, viewing angle, etc. This effect is known as color constancy.
In the classic demonstration, an upper disk and a lower disk have exactly the same objective color and sit in identical gray surroundings; based on context differences, humans perceive the disks as having different reflectances, and may interpret the colors as belonging to different color categories; see checker shadow illusion. Neglecting perceptual effects for now, some generalizations of the physics can be drawn. To summarize, the color of an object is a complex result of its surface properties, its transmission properties, and its emission properties, all of which contribute to the mix of wavelengths in the light leaving the surface of the object. The perceived color is then further conditioned by the nature of the ambient illumination and by the color properties of other objects nearby, as well as by other characteristics of the perceiving eye and brain. An image of the full set of RGB colors, viewed at full size, contains about 16 million pixels, each corresponding to a different color. The human eye can distinguish about 10 million different colors. Main article: Color theory Although Aristotle and other ancient scientists had already written on the nature of light and color vision, it was not until Newton that light was identified as the source of the color sensation. In 1810, Goethe published his comprehensive Theory of Colors, in which he ascribed physiological effects to color that are now understood as psychological. In 1801, Thomas Young proposed his trichromatic theory, based on the observation that any color could be matched with a combination of three lights. This theory was later refined by James Clerk Maxwell and Hermann von Helmholtz. As Helmholtz puts it, "the principles of Newton's law of mixture were experimentally confirmed by Maxwell in 1856. Young's theory of color sensations, like so much else that this marvelous investigator achieved in advance of his time, remained unnoticed until Maxwell directed attention to it."
At the same time as Helmholtz, Ewald Hering developed the opponent process theory of color, noting that color blindness and afterimages typically come in opponent pairs (red-green, blue-orange, yellow-violet, and black-white). Ultimately these two theories were synthesized in 1957 by Hurvich and Jameson, who showed that retinal processing corresponds to the trichromatic theory, while processing at the level of the lateral geniculate nucleus corresponds to the opponent theory. In 1931, an international group of experts known as the Commission internationale de l'éclairage (CIE) developed a mathematical color model, which mapped out the space of observable colors and assigned a set of three numbers to each. Main article: Color vision The ability of the human eye to distinguish colors is based upon the varying sensitivity of different cells in the retina to light of different wavelengths. Humans are trichromatic: the retina contains three types of color receptor cells, or cones. One type, relatively distinct from the other two, is most responsive to light that is perceived as blue or blue-violet, with wavelengths around 450 nm; cones of this type are sometimes called short-wavelength cones, S cones, or blue cones. The other two types are closely related genetically and chemically: middle-wavelength cones, M cones, or green cones are most sensitive to light perceived as green, with wavelengths around 540 nm, while the long-wavelength cones, L cones, or red cones, are most sensitive to light that is perceived as greenish yellow, with wavelengths around 570 nm. Light, no matter how complex its composition of wavelengths, is reduced to three color components by the eye. Each cone type adheres to the principle of univariance: a cone's output is determined solely by the total amount of light that falls on it across all wavelengths.
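The reduction of a full spectrum to three cone responses can be sketched numerically. The Gaussian sensitivities below are toy stand-ins peaking near the wavelengths the text cites (~450, ~540, ~570 nm), not measured human cone fundamentals; the point is only the shape of the computation.

```python
import math

# Toy cone sensitivity: a Gaussian around a peak wavelength (illustrative only).
def cone(peak_nm, width_nm=40.0):
    return lambda wl: math.exp(-((wl - peak_nm) / width_nm) ** 2)

S_CONE, M_CONE, L_CONE = cone(450.0), cone(540.0), cone(570.0)

def cone_responses(spectrum, wavelengths):
    """Sum spectrum x sensitivity for each cone type (rectangle-rule integral)."""
    return tuple(
        sum(spectrum(wl) * sens(wl) for wl in wavelengths)
        for sens in (S_CONE, M_CONE, L_CONE)
    )

wls = range(390, 701, 5)
# Monochromatic light at 540 nm: M responds most, but L overlaps strongly,
# illustrating why the "green" cones can never be stimulated in isolation.
s, m, l = cone_responses(lambda wl: 1.0 if abs(wl - 540) < 3 else 0.0, wls)
print(m > l > s)  # True
```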
For each location in the visual field, the three types of cones yield three signals based on the extent to which each is stimulated. These amounts of stimulation are sometimes called tristimulus values. The response curve as a function of wavelength varies for each type of cone. Because the curves overlap, some tristimulus values do not occur for any incoming light combination. For example, it is not possible to stimulate only the mid-wavelength (so-called "green") cones; the other cones will inevitably be stimulated to some degree at the same time. The set of all possible tristimulus values determines the human color space. It has been estimated that humans can distinguish roughly 10 million different colors. The other type of light-sensitive cell in the eye, the rod, has a different response curve. In normal situations, when light is bright enough to strongly stimulate the cones, rods play virtually no role in vision at all. On the other hand, in dim light, the cones are understimulated, leaving only the signal from the rods, resulting in a colorless response. (Furthermore, the rods are barely sensitive to light in the "red" range.) In certain conditions of intermediate illumination, the rod response and a weak cone response can together result in color discriminations not accounted for by cone responses alone. These effects, combined, are summarized in the Kruithof curve, which describes the change of color perception and pleasingness of light as a function of temperature and intensity. Main article: Color vision In diagrams of the visual pathways, the dorsal stream and the ventral stream are distinguished; the ventral stream is responsible for color perception. While the mechanisms of color vision at the level of the retina are well described in terms of tristimulus values, color processing after that point is organized differently.
A dominant theory of color vision proposes that color information is transmitted out of the eye by three opponent processes, or opponent channels, each constructed from the raw output of the cones: a red–green channel, a blue–yellow channel, and a black–white "luminance" channel. This theory has been supported by neurobiology, and accounts for the structure of our subjective color experience. Specifically, it explains why humans cannot perceive a "reddish green" or "yellowish blue", and it predicts the color wheel: it is the collection of colors for which at least one of the two color channels measures a value at one of its extremes. The exact nature of color perception beyond the processing already described, and indeed the status of color as a feature of the perceived world or rather as a feature of our perception of the world – a type of qualia – is a matter of complex and continuing philosophical dispute. Main article: Color blindness If one or more types of a person's color-sensing cones are missing or less responsive than normal to incoming light, that person can distinguish fewer colors and is said to be color deficient or color blind (though this latter term can be misleading; almost all color deficient individuals can distinguish at least some colors). Some kinds of color deficiency are caused by anomalies in the number or nature of cones in the retina. Others (like central or cortical achromatopsia) are caused by neural anomalies in those parts of the brain where visual processing takes place. Main article: Tetrachromacy While most humans are trichromatic (having three types of color receptors), many animals, known as tetrachromats, have four types. These include some species of spiders, most marsupials, birds, reptiles, and many species of fish. Other species are sensitive to only two axes of color or do not perceive color at all; these are called dichromats and monochromats respectively. 
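The opponent recoding described at the start of this section can be sketched as a recombination of the three cone signals. The unit weights below are illustrative, not physiological values.

```python
# Hedged sketch: cone signals (l, m, s) recombined into the three opponent
# channels the text names. Weights are made up for illustration.
def opponent(l: float, m: float, s: float):
    red_green = l - m              # positive toward red, negative toward green
    blue_yellow = s - (l + m) / 2  # positive toward blue, negative toward yellow
    luminance = l + m              # S cones contribute little to luminance
    return red_green, blue_yellow, luminance

# Strong L and M with little S reads as yellowish: red-green near zero,
# blue-yellow clearly negative. A single channel cannot sit at both of its
# extremes at once, which is why there is no "reddish green" percept.
rg, by, lum = opponent(l=0.9, m=0.8, s=0.1)
print(rg > 0 and by < 0)  # True
```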
A distinction is made between retinal tetrachromacy (having four pigments in cone cells in the retina, compared to three in trichromats) and functional tetrachromacy (having the ability to make enhanced color discriminations based on that retinal difference). As many as half of all women are retinal tetrachromats. The phenomenon arises when an individual receives two slightly different copies of the gene for either the medium- or long-wavelength cones, which are carried on the X chromosome. To have two different genes, a person must have two X chromosomes, which is why the phenomenon only occurs in women. There is one scholarly report that confirms the existence of a functional tetrachromat. In certain forms of synesthesia/ideasthesia, perceiving letters and numbers (grapheme–color synesthesia) or hearing musical sounds (music–color synesthesia) will lead to the unusual additional experience of seeing colors. Behavioral and functional neuroimaging experiments have demonstrated that these color experiences lead to changes in behavioral tasks and to increased activation of brain regions involved in color perception, thus demonstrating their reality and similarity to real color percepts, albeit evoked through a non-standard route. After exposure to strong light in their sensitivity range, photoreceptors of a given type become desensitized. For a few seconds after the light ceases, they will continue to signal less strongly than they otherwise would. Colors observed during that period will appear to lack the color component detected by the desensitized photoreceptors. This effect is responsible for the phenomenon of afterimages, in which the eye may continue to see a bright figure after looking away from it, but in a complementary color. Afterimage effects have also been utilized by artists, including Vincent van Gogh.
Main article: Color constancy When an artist uses a limited color palette, the eye tends to compensate by seeing any gray or neutral color as the color which is missing from the color wheel. For example, in a limited palette consisting of red, yellow, black, and white, a mixture of yellow and black will appear as a variety of green, a mixture of red and black will appear as a variety of purple, and pure gray will appear bluish. The trichromatic theory is strictly true only when the visual system is in a fixed state of adaptation. In reality, the visual system is constantly adapting to changes in the environment and compares the various colors in a scene to reduce the effects of the illumination. If a scene is illuminated with one light and then with another, as long as the difference between the light sources stays within a reasonable range, the colors in the scene appear relatively constant to us. This was studied by Edwin Land in the 1970s and led to his retinex theory of color constancy. Both phenomena are readily explained and mathematically modeled with modern theories of chromatic adaptation and color appearance (e.g. CIECAM02, iCAM). There is no need to dismiss the trichromatic theory of vision; rather, it can be enhanced with an understanding of how the visual system adapts to changes in the viewing environment. Main article: Color term See also: Lists of colors and Web colors Colors vary in several different ways, including hue (shades of red, orange, yellow, green, blue, and violet), saturation, brightness, and gloss. Some color words are derived from the name of an object of that color, such as "orange" or "salmon", while others are abstract, like "red". In the 1969 study Basic Color Terms: Their Universality and Evolution, Brent Berlin and Paul Kay describe a pattern in naming "basic" colors (like "red" but not "red-orange" or "dark red" or "blood red", which are "shades" of red).
All languages that have two "basic" color names distinguish dark/cool colors from bright/warm colors. The next colors to be distinguished are usually red and then yellow or green. All languages with six "basic" colors include black, white, red, green, blue, and yellow. The pattern holds up to a set of twelve: black, gray, white, pink, red, orange, yellow, green, blue, purple, brown, and azure (distinct from blue in Russian and Italian, but not English). Individual colors have a variety of cultural associations such as national colors (in general described in individual color articles and color symbolism). The field of color psychology attempts to identify the effects of color on human emotion and activity. Chromotherapy is a form of alternative medicine attributed to various Eastern traditions. Colors have different associations in different countries and cultures. Different colors have been demonstrated to have effects on cognition. For example, researchers at the University of Linz in Austria demonstrated that the color red significantly decreases cognitive functioning in men. The CIE 1931 color space chromaticity diagram. The outer curved boundary is the spectral (or monochromatic) locus, with wavelengths shown in nanometers. The colors depicted depend on the color space of the device on which you are viewing the image, and therefore may not be a strictly accurate representation of the color at a particular position, and especially not for monochromatic colors. Most light sources are mixtures of various wavelengths of light. Many such sources can still effectively produce a spectral color, as the eye cannot distinguish them from single-wavelength sources. For example, most computer displays reproduce the spectral color orange as a combination of red and green light; it appears orange because the red and green are mixed in the right proportions to allow the eye's cones to respond the way they do to the spectral color orange. 
A useful concept in understanding the perceived color of a non-monochromatic light source is the dominant wavelength, which identifies the single wavelength of light that produces a sensation most similar to the light source. Dominant wavelength is roughly akin to hue. There are many color perceptions that by definition cannot be pure spectral colors due to desaturation or because they are purples (mixtures of red and violet light, from opposite ends of the spectrum). Some examples of necessarily non-spectral colors are the achromatic colors (black, gray, and white) and colors such as pink, tan, and magenta. Two different light spectra that have the same effect on the three color receptors in the human eye will be perceived as the same color. They are metamers of that color. This is exemplified by the white light emitted by fluorescent lamps, which typically has a spectrum of a few narrow bands, while daylight has a continuous spectrum. The human eye cannot tell the difference between such light spectra just by looking into the light source, although reflected colors from objects can look different. (This is often exploited; for example, to make fruit or tomatoes look more intensely red.) Similarly, most human color perceptions can be generated by a mixture of three colors called primaries. This is used to reproduce color scenes in photography, printing, television, and other media. There are a number of methods or color spaces for specifying a color in terms of three particular primary colors. Each method has its advantages and disadvantages depending on the particular application. No mixture of colors, however, can produce a response truly identical to that of a spectral color, although one can get close, especially for the longer wavelengths, where the CIE 1931 color space chromaticity diagram has a nearly straight edge. 
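Matching a target color with three primaries, as described above, amounts to solving a small linear system: find weights w such that P·w equals the target, where the columns of P hold the primaries' tristimulus values. The numbers below are made up for illustration; weights outside [0, 1] would mean the target lies outside the gamut those primaries can reproduce.

```python
# 3x3 determinant, written out so the sketch needs no external library.
def det3(a):
    return (a[0][0]*(a[1][1]*a[2][2] - a[1][2]*a[2][1])
          - a[0][1]*(a[1][0]*a[2][2] - a[1][2]*a[2][0])
          + a[0][2]*(a[1][0]*a[2][1] - a[1][1]*a[2][0]))

def solve_weights(P, target):
    """Cramer's rule: weight of each primary needed to match `target`."""
    d = det3(P)
    weights = []
    for i in range(3):
        Pi = [row[:] for row in P]
        for r in range(3):
            Pi[r][i] = target[r]
        weights.append(det3(Pi) / d)
    return weights

# Columns: hypothetical "red", "green", "blue" primaries, rows: S, M, L responses.
P = [[0.1, 0.2, 0.9],
     [0.2, 0.9, 0.2],
     [0.9, 0.4, 0.1]]
target = [0.4, 0.5, 0.6]
w = solve_weights(P, target)
mix = [sum(P[r][i] * w[i] for i in range(3)) for r in range(3)]
print([round(x, 6) for x in mix])  # reproduces the target responses
```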
For example, mixing green light (530 nm) and blue light (460 nm) produces cyan light that is slightly desaturated, because the response of the red color receptor would be greater to the green and blue light in the mixture than it would be to a pure cyan light at 485 nm of the same intensity as the mixture. Because of this, and because the primaries in color printing systems generally are not pure themselves, the colors reproduced are never perfectly saturated spectral colors, and so spectral colors cannot be matched exactly. However, natural scenes rarely contain fully saturated colors, so such scenes can usually be approximated well by these systems. The range of colors that can be reproduced with a given color reproduction system is called the gamut. The CIE chromaticity diagram can be used to describe the gamut. Another problem with color reproduction systems is connected with the acquisition devices, like cameras or scanners. The characteristics of the color sensors in these devices are often very far from the characteristics of the receptors in the human eye. In effect, acquisition of colors can be relatively poor if the colors have unusual, often very "jagged", spectra, caused for example by unusual lighting of the photographed scene. A color reproduction system "tuned" to a human with normal color vision may give very inaccurate results for other observers. The different color response of different devices can be problematic if not properly managed. For color information stored and transferred in digital form, color management techniques, such as those based on ICC profiles, can help to avoid distortions of the reproduced colors. Color management does not circumvent the gamut limitations of particular output devices, but it can assist in finding a good mapping of input colors into the gamut that can be reproduced. Additive color mixing: combining red and green yields yellow; combining all three primary colors together yields white.
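The two mixing rules, additive light (summing toward white) and subtractive pigment (absorbing toward black), can be sketched on unit RGB triples. The values are illustrative, not a calibrated color model.

```python
# Additive mixing: lights sum channel-wise; combining everything tends to white.
def additive(*lights):
    return tuple(min(1.0, sum(c[i] for c in lights)) for i in range(3))

# Subtractive behavior: a surface reflects only the fraction of each channel
# of the illuminant that it does not absorb; everything together tends to black.
def reflected(surface, illuminant=(1.0, 1.0, 1.0)):
    return tuple(s * i for s, i in zip(surface, illuminant))

RED, GREEN, BLUE = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)
print(additive(RED, GREEN))        # (1.0, 1.0, 0.0) -> yellow
print(additive(RED, GREEN, BLUE))  # (1.0, 1.0, 1.0) -> white

# Red paint under blue light: the paint reflects almost no blue, and the
# light carries no red, so almost nothing reaches the eye (near black).
print(reflected(surface=(1.0, 0.05, 0.05), illuminant=(0.0, 0.0, 1.0)))
```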
Additive color is light created by mixing together light of two or more different colors. Red, green, and blue are the additive primary colors normally used in additive color systems such as projectors and computer terminals.

Subtractive color mixing: combining yellow and magenta yields red; combining all three primary colors together yields black.

Subtractive coloring uses dyes, inks, pigments, or filters to absorb some wavelengths of light and not others. The color that a surface displays comes from the parts of the visible spectrum that are not absorbed and therefore remain visible. Without pigments or dye, fabric fibers, paint base, and paper are usually made of particles that scatter white light (all colors) well in all directions. When a pigment or ink is added, wavelengths are absorbed or "subtracted" from white light, so light of another color reaches the eye. If the light is not a pure white source (the case with nearly all forms of artificial lighting), the resulting spectrum will appear a slightly different color. Red paint, viewed under blue light, may appear black: red paint is red because it scatters only the red components of the spectrum, so when it is illuminated by blue light, the blue light is absorbed by the paint, creating the appearance of a black object.

Further information: Structural coloration and Animal coloration

Structural colors are colors caused by interference effects rather than by pigments. Color effects are produced when a material is scored with fine parallel lines, formed of one or more parallel thin layers, or otherwise composed of microstructures on the scale of the color's wavelength. If the microstructures are spaced randomly, light of shorter wavelengths will be scattered preferentially, producing Tyndall effect colors: the blue of the sky (Rayleigh scattering, caused by structures much smaller than the wavelength of light, in this case air molecules), the luster of opals, and the blue of human irises.
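The red-paint-under-blue-light example above can be approximated band by band: the light reaching the eye is the illuminant's power multiplied by the surface's reflectance in each band. A minimal three-band sketch (the reflectance and illuminant numbers are invented for illustration):

```python
# Per-band sketch of subtractive color: what reaches the eye is the
# illuminant's power times the surface reflectance in each band (R, G, B).
def reflect(illuminant, reflectance):
    return tuple(i * r for i, r in zip(illuminant, reflectance))

red_paint   = (0.9, 0.05, 0.05)   # reflects mostly red, absorbs green and blue
white_light = (1.0, 1.0, 1.0)     # all bands present
blue_light  = (0.0, 0.0, 1.0)     # blue band only

print(reflect(white_light, red_paint))   # (0.9, 0.05, 0.05): looks red
print(reflect(blue_light, red_paint))    # (0.0, 0.0, 0.05): nearly black
```

Under white light the paint returns mostly red; under blue light almost nothing is reflected, which is why the paint appears black.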
If the microstructures are aligned in arrays, for example the array of pits in a CD, they behave as a diffraction grating: the grating reflects different wavelengths in different directions due to interference phenomena, separating mixed "white" light into light of different wavelengths. If the structure is one or more thin layers, it will reflect some wavelengths and transmit others, depending on the layers' thickness. Structural color is studied in the field of thin-film optics. A layman's term for the most ordered or most changeable structural colors is iridescence.

Structural color is responsible for the blues and greens of the feathers of many birds (the blue jay, for example), as well as certain butterfly wings and beetle shells. Variations in the pattern's spacing often give rise to an iridescent effect, as seen in peacock feathers, soap bubbles, films of oil, and mother of pearl, because the reflected color depends upon the viewing angle. Numerous scientists, including Isaac Newton and Robert Hooke, have carried out research on butterfly wings and beetle shells. Since 1942, electron micrography has been used, advancing the development of products that exploit structural color, such as "photonic" cosmetics.
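The CD-as-grating behavior above follows the grating equation d sin(theta) = m * lambda. A short sketch, assuming a typical CD track pitch of about 1.6 micrometres, shows how different wavelengths leave the surface at different first-order angles:

```python
import math

# Grating equation for a CD's track spacing, treated as a reflection grating:
#     d * sin(theta_m) = m * lambda
# Solve for the first-order (m = 1) angle for a few wavelengths.
d = 1.6e-6  # assumed CD track pitch, roughly 1.6 micrometres

angles = {}
for name, lam in (("blue", 460e-9), ("green", 530e-9), ("red", 650e-9)):
    angles[name] = math.degrees(math.asin(lam / d))
    print(f"{name:5s} ({lam * 1e9:.0f} nm): first order at {angles[name]:.1f} deg")

# Longer wavelengths diffract to larger angles, so mixed white light fans
# out into a spectrum -- the rainbow sheen seen on a CD surface.
```

Because each wavelength satisfies the equation at a different angle, white light is spatially separated into its components, which is exactly the "separating mixed white light" effect described above.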