Bleeding edge
In computer science, bleeding edge is a term that refers to technology so new (and thus, presumably, not perfected) that users must accept reduced stability and productivity in order to use it. The term also reflects the tendency of the latest technology to be extremely expensive.
The term is formed as an allusion to "leading edge" and its synonym cutting edge, but implying a greater degree of risk: the "bleeding edge" is in front of the "cutting edge". A technology may be considered bleeding edge under the following conditions:
- Lack of consensus - competing approaches to the new technology exist, and it is not yet clear which one the market will adopt.
- Lack of knowledge - organizations are trying to implement a new technology or product that the trade journals have not yet begun to discuss, either for or against.
- Industry resistance to change - trade journals and industry leaders have spoken against a new technology or product but some organizations are trying to implement it anyway because they are convinced it is technically superior.
The rewards for successful early adoption of new technologies can be great; unfortunately, the penalties for 'betting on the wrong horse' or choosing the wrong product are equally large. Whenever an organization takes a chance on bleeding edge technology, there is a good chance it will be stuck with a white elephant or worse.
Note: Recently, the term bleeding edge has been increasingly used by the general public simply to mean "ahead of cutting edge", largely without the negative, risk-associated connotation that accompanies the term's use in more specialized fields.