The use of many complex technologies has been either much more limited than, or essentially different from, the great expectations held when development started. Based on an analysis of the reasons for past failures, we propose three rules for a sensible network design process that avoids useless development effort. First, the analysis of customer needs must concentrate on practical uses that are likely to become everyday routines. Second, the development of a new technology must be based on well-defined, carefully selected core principles. Third, during the development process, real experiences in real networks must be continuously taken into account.
The networking industry and research communities have devoted extensive effort to elaborate technologies and services, most notably Asynchronous Transfer Mode (ATM) and Quality of Service (QoS), with negligible impact on the services offered to real customers. Why did rational people and respectable companies decide to start those efforts in the first place, and then extend them year after year? Those who make investment decisions about future networks and services need to understand the reasons behind these recurrent phenomena.
While the total number of scientific papers addressing ATM or QoS clearly exceeds 10,000, only a few deal with the obvious failure of these technologies. Andrew Odlyzko (2000a) appears to be the first author to have seriously considered the issue publicly. He stated that the main reason behind the failure of ATM was that it was designed for long-lived flows with well-defined bandwidth and QoS requirements, whereas in reality most flows are small and do not require any specific bandwidth or QoS. Michael Welzl (2004) stressed the problem that end-to-end QoS requires everything between the end points, including routers, links, and physical layers, to fulfill the same service guarantees. These requirements are extremely difficult to realize in heterogeneous networks.
In addition to technical reasons, there are also economic reasons for the failure of QoS. Clark, et al. (2002) stated that any QoS mechanism incurs operational and management costs while there is no guarantee of increased revenues. Burgstahler, et al. (2003) came to a similar conclusion: the price of advanced technologies, such as Fiber Distributed Data Interface (FDDI), Token Ring, and ATM, has been too high compared to the business benefits they are able to offer.
Finally, the whole development process of comprehensive technologies is problematic. Gregory Bell (2003) emphasized the broken or attenuated feedback loop between network operations and research, which often leads to an unacceptable level of complexity in new technologies. Henning Schulzrinne (2004) addressed these issues relative to protocol and network design.
This paper concentrates primarily on the evolution of network services. However, most of the remarks made here are general and can be applied to various other topics as well, even outside the area of communications. These issues, as discussed in this paper, are related to our intrinsic behavior and ways of thinking. Hence we are inclined to repeat the same errors over and over again, even though the fundamental problems of networks with complex service models are evident to anyone who takes a serious look at the history of communications.
Therefore, we first have to consider the reasons that have repeatedly led to similar undesirable outcomes. Second, we have to propose some practical ways and rules to obtain more useful outcomes. Both the exploration of the reasons and the answers we propose consist of three parts: fundamental human needs, the attractiveness of capabilities, and selfish evolution. To achieve a practical design for a new network or service, the problems related to all three parts must be appropriately solved. The aim of this paper is to offer practical advice toward that objective.
The history of networking technology is full of great triumphs and miserable failures. ATM as an end-user service with a rich set of QoS features is an example of a grand failure. The use of ATM technology or QoS mechanisms for traffic engineering or network management purposes is another issue, and is not discussed here. Although it may now seem that real end-user needs were ignored when the basic ATM design was settled, the reality was quite different. As an example, Carsten Rasmussen (1990) expressed his opinion about the need for broadband as follows: "... if someone invents a service, that is really interesting for private users, the market could suddenly explode. What is such a service? Really active choice of entertainment such as Dial up your favorite Fellini, Get your grandchildren right into your living room, or a multimedia encyclopedia, where a subject is demonstrated optimally on a combination of words, sound and interactive video."
This way of thinking could be part of a reasonable strategy for a service provider. In general, the need for video distribution was regularly presented as the key justification for ATM. In particular, video-on-demand trials were popular among large network operators and research organizations in the mid-1990s. But then came the Internet, which provided multimedia encyclopedias and other information resources and, indeed, exploded the market. In contrast, although current access speeds make it possible to provide video services with reasonable quality, streaming video still represents a negligible share of the total value of Internet services. Instead, it has turned out that the most valuable uses of an always-on network connection are:
- connectivity to others by email, news groups, Internet Relay Chat (IRC), blogs and other tools;
- connectivity to information by using Web browsing; and,
- file sharing by means of peer-to-peer applications.
These applications do not benefit from the sophisticated features provided by ATM or other similar technologies, because they are able to adapt to differing transmission capabilities without any significant problems. Thus the main advantage of the Internet is easy-to-use connectivity to everywhere at a reasonable price. Odlyzko (2000b) discussed this topic extensively in his seminal paper, in which he noted that "Historically, the willingness to pay for entertainment and similar broadcast information has been low compared to that of spontaneous point-to-point communication."
For instance, in the United States in 1997 the total revenue of the telephone sector was $256 billion, compared to total revenues of $63 billion in the motion picture sector and $52 billion in the TV and radio broadcast industry (Odlyzko, 2000b). The same pattern can be found even when comparing the delivery of letters and newspapers. Regardless of this historical evidence, the need for streaming applications is again and again presented as the main driver for future network business. The actual problem is the assumption that new networks have to be designed specifically for video applications.
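The scale of this gap is easy to make concrete with a back-of-the-envelope calculation over the 1997 figures cited above (the revenue figures come from Odlyzko, 2000b; the calculation itself is merely an illustration):

```python
# 1997 U.S. revenue figures cited from Odlyzko (2000b), in billions of dollars.
telephone = 256        # point-to-point communication (telephone sector)
motion_pictures = 63   # broadcast/entertainment (motion pictures)
tv_radio = 52          # broadcast/entertainment (TV and radio)

# Total broadcast-style revenue versus point-to-point revenue.
broadcast_total = motion_pictures + tv_radio
ratio = telephone / broadcast_total
print(f"Point-to-point revenue was about {ratio:.1f}x broadcast revenue")
```

Even lumping the two broadcast sectors together, point-to-point communication generated more than twice their combined revenue, which is the essence of Odlyzko's argument.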
Even after reading and seriously thinking about these historical facts, video streaming remains curiously attractive. Consider a simple questionnaire item like "How attractive (on a scale from one to five) is a video service offering your favorite movies to you whenever you want?" Most probably, a majority of answers will be either four or five, because, of course, everyone wants to see their favorite movie.
What conclusions are we allowed to draw from such questionnaires? More specifically, what is the exact question the respondents are inclined to answer? In a typical questionnaire, the question actually answered might be of the following form: which of two situations is more attractive for you, x or y? It may even be that the comparison is made between "watching a movie" and "having nothing special to do," in which case the replies are obvious. In a somewhat more relevant case, x is something special and not yet available, like "watch your favorite movie while traveling," and y is an everyday event, like "make a voice call to a member of your family." In this case the special event will most likely be deemed more attractive than the everyday event.
However, we doubt that average users are willing to spend more money per month on watching their favorite movies via a new medium than on telephoning family members. There are many other issues, beyond pure attractiveness, to be considered, such as:
- How often are you willing to watch your favorite movies?
- Where and through which media do you want to watch them?
- Which old activity are you ready to give up because you will be watching more movies?
- Is there any essential change that will alter your current behavior?
The number of favorite movies is limited, and you likely prefer to watch them in an enjoyable environment through high-quality media, and only when you have large blocks of trouble-free time. With less valuable movies the last two questions are critical, because most of us have something better to do than watch trifling movies. Therefore, the business opportunity for any new movie medium is extremely challenging when these issues are taken into account. If we then ask similar questions about an everyday service, like "How often are you willing to call your family?" or "Which is your favorite medium for those calls?", it is apparent that the overall result looks much better from the perspective of a mobile operator.
Similar considerations apply to many other applications, such as streaming audio, video telephony, and the Multimedia Message Service (MMS). Do you want to immediately send pictures from a holiday trip to your closest friends via MMS? Likely yes, but how often do those events occur in reality, and what are the alternative ways to share images? If you are likely to send MMS only on special occasions, it will never become an everyday routine. By contrast, because the use of the Short Message Service (SMS) is an everyday custom, the number of text messages will remain much larger than that of multimedia messages, unless some MMS use transforms into everyday usage. Moreover, it is important to remember that the popularity of an application itself has a strong positive effect on its usefulness, as discussed by Kilkki and Kalervo (2004).
If someone wants to prove that a new application provides a great business opportunity, the right approach is not to find the most attractive use cases for the application, but rather to seek practical uses that are likely to become everyday routines for a majority of customers. Those everyday uses will be much more important for a business than any fancy but rare event.
Aspiration toward new capabilities and detailed control
The attractiveness of video streaming is an example of a much broader phenomenon. Steven Talbott (1999) expressed his insight into the general attractiveness of new items: "The implication [of a narrow focus] is that technical developments almost carry their own justification. There are no considerations that go beyond the obvious desirability of specific capabilities. A capability, merely considered as such, will always seem better to have than not to have."
The capability may as well be a new appliance or service, or a new feature in a current appliance or service. With all of these it seems that most of us are inclined to consider any new capability as desirable. Network services and networked appliances are an area in which the desire for new capabilities is particularly strong and may easily lead to the phenomenon called "creeping featurism" (Norman, 1998). But is there any real problem if each capability can somehow be considered useful? David Isenberg and David Weinberger (2004) came to the following conclusion: "Designing a network that is intelligently tuned (optimized) for a particular type of data or service such as TV or financial transactions inevitably makes that network less open. As software engineers say, Today’s optimization is tomorrow’s bottleneck."
It is almost always easier to envisage the main advantage of a new capability than to think of all the possible drawbacks; the advantage is what the capability is designed for in the first place, whereas the drawbacks are often obscure and dispersed here and there. Besides, a major drawback of a new network capability may materialize only when a totally new use is invented. How could the inventor of the capability assess the significance of that drawback before the new use is even invented?
One specific area in which the desirability of capabilities seems to be irresistible is control mechanisms for network services. We can take any networking technology whatsoever, either in the early phase of development or in the phase of wide-scale use, and immediately notice amazing activity centered on inventing new or improved control methods. It is always possible to show that any new control capability might be helpful, just by suitably limiting the scope of the study. The limited scope also makes it possible to achieve some seemingly concrete and relevant results even in a small research project. The area of network performance and quality of service is a perfect choice in this sense, provided that all the annoying problems of fuzzy reality are ignored.
Of course, there is no way to predict the significance of every individual problem generated in the future. In the best case we have some reasonable guess about the total size of these problems. The main issue to be considered here is how the assumed importance of unspecified problems can be appropriately taken into account.
"More control leads to better service" seems to be the dominant paradigm. Yet there is a diversity of paradigms among the various sectors of communication. Paal Engelstad addressed this issue in the field of digital communications using the concepts of scientific paradigms and core principles. According to Engelstad (2000):
- the main hard core principle of telecommunication is that the network should provide centralized provision of network services at the granularity of each user session, while
- the main principle of the hard core of computer communication is that the network should accommodate connectivity between host-based applications.
These two core principles have led to dissimilar technical constructions, namely ATM and the Internet Protocol (IP). There are some legitimate technical reasons behind the differences, because voice, video, and data have different quality and quantity requirements. Moreover, telephone business models differ from data business models. However, in addition to these issues there has been an essential difference in the relationship between design process and implementation. ATM was largely designed before any real use of the technology, whereas IP has traditionally been developed together with real implementations (Clark, et al., 2004). Consequently, the IP community has been more practical while the ATM community has been more theoretically oriented.
In a way, mathematics is the ultimate sector of control, since everything in mathematics is either correct and totally controllable or not mathematical at all. Although Gödel’s incompleteness theorems from 1931 upset this ideal picture, those with a strong mathematical background tend to prefer strictly controlled systems. In the case of an elegant mathematical proof of the advantages of a new capability, it is important to remember, more than ever, that the specific advantage is only part of the whole picture; the other part, with less desirable results, is mostly out of the reach of pure mathematics and easy to underestimate.
Because of these kinds of background issues, the approach adopted by a community gradually tends to become the sole truth, or paradigm, and that paradigm is applied even in cases where the reasons that led to it are no longer valid. In particular, if the paradigm favors detailed control, as is the case with ATM, there is almost nothing to prevent the technology from evolving toward extreme complexity. Even when the paradigm tries to limit the trend toward complexity, as IP does, there will never be a lack of honest proposals to improve the technology by just adding a nice feature or two.
What can be done to conquer this natural course toward extreme complexity? Our answer is the use of well-defined core principles. It is important to understand and accept that only core principles are able to account for all the undesirable effects that are hard or even impossible for individual researchers to identify beforehand. A core principle is not merely something to be obeyed when there is no specific reason to do otherwise; quite the opposite: core principles must never be broken, for any reason whatsoever. Nor should we underestimate the benefit of the certainty that everyone else designing other parts of the technology will follow the same core principles.
Wide-scale use of a technology forms a strong justification for the relevance of its selected core principles. In contrast, if a technology is not used in reality as planned, in spite of a serious development effort, that is a strong indication that something is wrong with the core principles: the principles were not clear enough, the community did not comply with them, or some of the principles were unfeasible.
Once again, we may compare IP and ATM. The most fundamental contrast between IP and ATM is that IP offers connectivity to a wide network whereas ATM offers specific connections through the network at the granularity of each user session. In short, ATM provides much more detailed traffic control than IP. From this perspective detailed control does not seem to be a good core principle, especially for a network in which data represents the majority of traffic.
Our studies of the usefulness of communication services indicate that a small set of basic features is able to provide the great majority of benefits (Pohjola and Kilkki, 2004). Any additional feature, for instance a QoS mechanism, typically provides only a marginal gain that is easily lost due to the additional effort needed to manage and operate the network. Therefore, although it may seem reasonable to develop a new capability, the following matters must be carefully considered:
- Better service for a set of users does not imply better user experience in general, because better service for someone usually means worse or no service for some other users.
- Better technical performance does not directly imply better business performance, because the relationship between transmitted bytes and operator profit varies vastly from case to case.
- Complex methods to boost performance or service quality are likely to generate complex interactions that may seriously complicate network management, and actually cancel out any positive effects.
In summary, the core of the problem discussed here is the narrow scope used to assess the value of new capabilities. The solution is to carefully define the core principles of the technology. The main task of the principles is to take care of all the important issues missed by the narrow scope, and to keep the development effort coherent and understandable.
Network evolution based on selfish behavior
There are plenty of players who together define how communication networks and services evolve in reality, and then there are those who would like to decide how the networks should be developed. All the players, including customers, service providers, and scholars, have their own interests. Every player, particularly one with an economic objective, is likely to behave selfishly; the common good is pursued only if it also happens to be in the self-interest of the player, or if a mechanism is able to convert the common good into the individual benefit of the majority of players. Legislation or binding contracts between key players could form such a converting mechanism. But even with common rules, real progress toward the common good is often slow and uncertain if the self-interest of some players runs essentially against the rules.
In the case of a global network there is a very slim possibility of reaching any common agreement that could contend against the self-interest of network operators and service providers. The Internet Engineering Task Force (IETF) surely is a global entity, but its main role is to provide common agreements about the possible ways to accomplish something in networks based on IP standards. However, the IETF has no power to dictate how network operators shall use their equipment. Although some regional organizations and public authorities may regulate network operators, their main role is to restrict undesirable uses of the Internet rather than to stimulate specific novel technologies or applications. Novelties will find their way to the real Internet only if they serve the interest of network operators and service providers, or if the novelty does not need any action from them.
Service providers are incessantly searching for ways to attain a competitive edge over their main competitors. We may even argue that if the common good is achieved by a certain technology only when all network operators are using it, any single operator has quite a weak motivation to implement it, because competition will likely nullify any business benefits obtainable from the technology. Kevin Jeffay (2004) has expressed this as follows: "Making one’s network look the same as another’s is not a recipe for success for an ISP [Internet Service Provider]." In that sense, a technology or application that immediately provides a competitive edge over competitors is attractive and will likely be utilized in real networks, even when it works against the common good.
Similar observations can be made about pricing. Although the optimal pricing (with respect to the common good) seems to be some kind of complex congestion pricing, in a competitive environment reality inevitably leads to a different outcome. In general, the common good as such should not be considered the pure criterion for optimization, as was pointed out by Shenker, et al. (1996). They argued that:
- Marginal cost prices may not produce sufficient revenue to fully recover costs;
- Congestion costs are inherently inaccessible to the network; and,
- There are other, more structural, goals besides optimality.
They proposed that an architectural or structural viewpoint might provide the basis for a more realistic pricing design. In addition, it is possible that competitors and customers create conflicting pressures, which may gradually lead to a pricing model that cannot be justified by any optimization analysis.
A pricing model can be seen as one of the core principles of any communication network, although it usually materializes only after the network is used in reality for a while. When a dominant pricing model is established as a result of selfish actions, that model will be very difficult to change unless the change is in the interest of a great majority of players (which apparently requires some significant change in the environment). When the core principles related to separate aspects become mature, the overall situation becomes stable because the principles are usually bound together and cannot be changed separately. For instance, connectionless forwarding, best effort service, and flat rate pricing form that kind of cluster of principles for the Internet.
We may also argue that pricing evolves toward simpler models regardless of how many benefits complex models claim to provide, as Odlyzko (2004) has exquisitely analyzed. A complex pricing model seems to survive only in a monopolistic situation, and even then customer preferences exert strong pressure to simplify the overall pricing model. In summary, although price discrimination is a popular approach, it hardly ever seems to work in reality.
These general ideas may also explain why research funded by public sources tends to develop great ideas and technologies that seem to ideally serve the common good, but that are finally ignored in reality. Because scholars tend to focus on the common good rather than the self-interest of individual players, some research may ignore the effects of a "selfish" reality. In many cases, research and analysis may be much easier when those issues are ignored. Thus the flow of bona fide proposals to improve any major technology continues forever. But these proposals do not indicate any real change in network technology, as long as there is no mechanism to bring these ideas into realistic development and implementation.
Schulzrinne (2004) presented the hypothesis that "The success of a technology is inversely proportional to the number of papers published before its popularity." This, indeed, appears to be a valid hypothesis if we look at the number of papers published about ATM and Ethernet, shown in Figure 1. In 1994 there were eight times more papers about ATM than about Ethernet, although ATM was still in a trial phase whereas Ethernet was widely adopted in real networks. Further, in 1994 there were 69 abstracts mentioning both ATM and quality of service, and only one paper mentioning both Ethernet and quality of service (and that abstract also mentioned ATM). Thus, the appearance of a particularly large number of papers about the QoS of a network technology before its real use is not a promising sign.
Figure 1: Number of papers per year with ATM or Ethernet in the abstract; data from IEEE Xplore (2004), with estimated values for 2004.
Even more revealing examples can be found related to services. Only a couple of papers mention weblogs, while any search engine is able to find tens of millions of weblog pages. Similarly, peer-to-peer is only just gaining momentum in some of the formal literature although it is hugely popular in reality. Popular networks and services often seem technically inferior or scientifically uninteresting compared to grand constructions like ATM and QoS. Moreover, many practical aspects are discussed very rarely: a term as common in business analysis as Average Revenue Per User (ARPU) is mentioned in only one abstract in the entire IEEE database of one million papers (IEEE Xplore, 2004).
From this perspective, the most realistic approach is to promote an interactive development process. In that process every new feature must pass two strict tests: first, the feature must comply with the core principles of the technology, and second, key stakeholders must accept the feature and obtain wide-scale operational experience with it. Only after passing both tests might a feature be accepted as part of the standard. Further, only a limited number of new features can be tested at the same time. Note also that network operators need a clear justification for testing even a small feature, because they do not want to take any unnecessary risk that may endanger their existing business.
Actually, the Internet demonstrates the benefits of interactive network evolution. We may compare current Internet services against the future once anticipated for ATM services (switched connections with strict control and complex pricing) and be sure that the current state of affairs is more beneficial for the great majority of users. The Global System for Mobile Communications (GSM) is almost the only recent example of the successful development of a network technology before its actual use. In the case of GSM, human needs were well enough understood (the need for mobile voice), the core principles of the technology were clear and commonly accepted (connection-oriented service for voice), and all key stakeholders had sufficient motivation to develop appropriate networks and services.
The GSM success story will be difficult to repeat. It will be particularly difficult if the design represents a long leap from currently used technologies, or if the design tries to provide multiple dissimilar services for different purposes. Multiservice, in the sense that a special service is offered for each separate use, automatically creates difficulties in understanding diverse human needs and in defining core principles, and easily generates conflicting interests between different players. Thus, in spite of the attractiveness of the idea, a multiservice network appears to be an almost unachievable goal, not as a technology but as a basis for real services.
The main obstacles preventing practical network design stem from a lack of understanding of three fundamental issues: real human needs, the disadvantages of excessive capabilities, and the effect of selfish behavior on network evolution. Because all of these are deeply tied to our ways of thinking, we need strict, deliberate methods and procedures to avoid repeating old errors. Our proposal to solve this difficulty consists of the following three rules:
- When the value of a new application or service is assessed, we concentrate on practical uses that are likely to become everyday routines for the majority of customers, instead of seeking special cases with the utmost attractiveness. Network technology should be able to efficiently support those routine services.
- The development of a new technology must be based on well-defined, carefully selected core principles, and those principles must always be honored regardless of any temptation to deviate from them. Some of the core principles must be able to limit the innate trend toward extreme complexity.
- When a current technology is developed further, we must look for methods and mechanisms that serve both the interests of key stakeholders and the common good. The only way to accomplish this is to take keenly into account the experiences gained in real networks during the development of a given technology.
As to the list of core principles, simplicity and realism are essential. A simple service or device with a realistic scope is usually able to offer a superior user experience compared to a complex, multipurpose service or device. Moreover, in the case of a complex service model, it is almost inevitable that some of the features are not in the interest of some key player in the chain of service provision. This phenomenon leads to a simple outcome even when the theoretical starting point was much more complex. Thus it is better to start with a reasonably simple model and save excessive development effort. The history of communications strongly indicates that if and only if the above-mentioned three rules are complied with can a network or service become a success story.
About the author
Kalevi Kilkki is a Principal Scientist at Nokia Research Center, Helsinki, Finland. He received Master of Science and Doctor of Technology degrees from Helsinki University of Technology in 1983 and 1995, respectively. Before joining Nokia he worked at Telecom Finland and at Helsinki University of Technology. During 1998-99 he worked in Boston, Mass., where he was involved in the development of the Differentiated Services specifications at the IETF. The main outcome of this activity was a book about Differentiated Services for the Internet (Indianapolis, Ind.: Macmillan Technical Publishing, 1999).
Email: kalevi [dot] kilkki [at] nokia [dot] com.
References
G. Bell, 2003. "Failure to Thrive: QoS and the Culture of Operational Networking," Proceedings of the ACM SIGCOMM workshop on Revisiting IP QoS: What have we learned, why do we care? (Karlsruhe, Germany), pp. 115–120.
L. Burgstahler, K. Dolzer, C. Hauser, J. Jähnert, S. Junghans, C. Macián, and W. Payer, 2003. "Beyond technology: the missing pieces for QoS success," Proceedings of the ACM SIGCOMM workshop on Revisiting IP QoS: What have we learned, why do we care? (Karlsruhe, Germany), pp. 121–130.
D. Clark, K. Sollins, J. Wroclawski, D. Katabi, J. Kulik, X. Yang, R. Braden, T. Faber, A. Falk, V. Pingali, M. Handley, and N. Chiappa, 2004. "New Arch: Future Generation Internet Architecture," Defense Advanced Research Projects Agency (DoD) Information Technology Office (ITO), at http://www.isi.edu/newarch/iDOCS/final.finalreport.pdf.
D.D. Clark, J. Wroclawski, K.R. Sollins, and R. Braden, 2002. "Tussle in Cyberspace: Defining Tomorrow's Internet," Proceedings of the 2002 Conference on Applications, Technologies, Architectures, and Protocols for Computer Communications (Pittsburgh, Pa.), pp. 347–356.
P. Engelstad, 2000. "Scientific Structures and Research Programs in Digital Communication," at http://www.unik.no/~paalee/publications/Pardigms_in_dig_comm3.pdf, accessed 24 November 2004.
IEEE Xplore, 2004. "Database for IEEE and IEE journals and conference papers," at http://ieeexplore.ieee.org/Xplore/DynWel.jsp, accessed 24 November 2004.
D. Isenberg and D. Weinberger, 2004. "The paradox of the best network," at http://netparadox.com/, accessed 24 November 2004.
K. Jeffay, 2004. "The Internet of 2024 ... is the Internet of 2004," presentation at the panel discussion at 12th IEEE International Conference on Network Protocols (ICNP), Berlin, Germany.
K. Kilkki and M. Kalervo, 2004. "KK-law for group forming services," Proceedings of the International Symposium on Services and Local Access (ISSLS) 2004 (Edinburgh, Scotland).
D.A. Norman, 1998. The invisible computer: Why good products can fail, the personal computer is so complex, and information appliances are the solution. Cambridge, Mass.: MIT Press.
A.M. Odlyzko, 2004. "Pricing and architecture of the Internet: Historical perspectives from telecommunications and transportation," Proceedings of the 32nd Research Conference on Communication, Information and Internet Policy (TPRC), Arlington, Virginia, at http://www.dtc.umn.edu/~odlyzko/doc/pricing.architecture.pdf.
A.M. Odlyzko, 2000a. "The Internet and other networks: Utilization rates and their implications," Information Economics & Policy, volume 12, pp. 341–365, and at http://www.dtc.umn.edu/~odlyzko/doc/internet.rates.pdf.
A.M. Odlyzko, 2000b. "The history of communications and its implications for the Internet," at http://www.dtc.umn.edu/~odlyzko/doc/history.communications0.pdf, accessed 26 November 2004.
O.P. Pohjola and K. Kilkki, 2004. "An economic approach to analyze and design mobile services," Proceedings of Networks 2004 (Vienna, Austria), pp. 333–338, and at http://holistic.nokia.com/, accessed 31 December 2004.
C. Rasmussen, 1990. "Asynchronous transfer mode, why and how," Proceedings of the 9th Nordic Teletraffic Seminar (Kjeller, Norway).
H. Schulzrinne, 2004. "Protocol and Network Design Challenges for a Consumer-Grade Internet," presentation at 12th IEEE International Conference on Network Protocols (ICNP), Berlin, Germany.
S. Shenker, D.D. Clark, D. Estrin, and S. Herzog, 1996. "Pricing in Computer Networks: Reshaping the Research Agenda," ACM Computer Communication Review, volume 26, pp. 19–43.
S. Talbott, 1999. "The Fascination with Ubiquitous Control," Netfuture, Issue number 95, at http://www.praxagora.com/stevet/netfuture/1999/Sep2399_95.html#2d, accessed 26 November 2004.
M. Welzl, 2004. "A Case for Middleware to Enable Advanced Internet Services," Proceedings of Next Generation Network Middleware Workshop (NGNM'04), Athens, Greece, pp. 20/1–20/5.
Paper received 29 November 2004; accepted 9 December 2004.
HTML markup: Edward J. Valauskas; Editor: Edward J. Valauskas.
Copyright ©2005, First Monday
Copyright ©2005, Kalevi Kilkki
Sensible design principles for new networks and services
by Kalevi Kilkki
First Monday, volume 10, number 1 (January 2005).