User-controlled digital environment across devices, places, and times with continuous, variable digital boundaries (2024)

This application is a divisional application and claims priority under 35 U.S.C. § 120 to U.S. application Ser. No. 13/068,942, filed May 24, 2011, entitled “REALITY ALTERNATE”, which is related to and claims the benefit of priority of U.S. Patent Application No. 61/396,644 filed May 28, 2010, entitled “REALITY ALTERNATE,” and U.S. Patent Application No. 61/403,896 filed Sep. 22, 2010, entitled “REALITY ALTERNATE.” The entire contents of application Ser. Nos. 13/068,942, 61/396,644, and 61/403,896 are incorporated herein by reference.

A portion of the material in this patent document is subject to copyright protection under the copyright laws of the United States and of other countries. The owner of the copyright rights has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office publicly available files or records, but otherwise reserves all copyright rights whatsoever.

Just as fiction authors have described alternate worlds in novels, this introduces an Alternate Reality, but provides it as a technical innovation. This new Alternate Reality's “world” is named the “Expandaverse,” a conceptual alteration of both the “Universe” name and our current reality. Where our physical “Universe” is considered given and physically fixed, the Expandaverse provides a plurality of human created digital realities, including a plurality of human created means that may be used simultaneously by individuals, groups, institutions and societies to expand the number and types of digital realities, and that may be used to provide continuous expansions of a plurality of Alternate Realities. To create the Expandaverse, currently known technologies are reorganized and combined with new innovations to repurpose what they accomplish and deliver, collectively turning the Earth and near-space into the equivalent of one large, connected room (herein one or a plurality of “Shared Planetary Life Spaces” or SPLS) with a plurality of new possible human realities and living patterns that may be combined, directed and controlled differently than our current physical reality.

In some examples of this Alternate Reality, people are more connected remotely, and are less connected to where they are physically present—and means are provided for multiple new types of devices, connections and “digital presence”. In some examples of this Alternate Reality, information on how to succeed is automatically collected during a plurality of activities, optimized and delivered to a plurality of others while they are doing the same types of activities, leading to opportunities for higher rates of personal success and greater economic productivity by adopting the most effective new uses, technologies, devices and systems—and means are provided for this. In some examples of this Alternate Reality, individuals may establish multiple identities and profiles, associate groups of identities together, and utilize any of them for earning additional income, owning additional wealth or enjoying life in new ways—and means are provided for this. In some examples of this Alternate Reality, means are enumerated for the evolution of multiple types of independent “governances” (which are separate from nation-state governments) that may be trans-border and increasingly augment “governments” in that each “governance” provides means for various new types of collective human successes and living patterns that range from personal sovereignty (within a governance), to economic sovereignties (within a governance), to new types of central authorities (within a governance). In some examples of this Alternate Reality, means (herein including means such as an “Alternate Reality Machine”) are provided for each identity (as described elsewhere) to create and manage a plurality of separate human realities that each provides manageable boundaries that determine the “presence” of that identity, wherein each separate reality may have boundaries such as prioritized interests (to include what is wanted), exclusion filters (to exclude what is not wanted), paywalls (to receive income such as for providing awareness and attention), digital and/or physical protections (to provide security from what is excluded), etc. In some examples of this Alternate Reality, means are provided for one or a plurality of new types of Utility(ies) that provide a flexible infrastructure such as for this Alternate Reality's remote presence in Shared Planetary Life Spaces, automated delivery of “how to succeed” interactions, multiple personal identities, creation and control of new types of “realities broadcasting,” independent “governances”, and numerous fundamental differences from our current reality. In some examples means are provided for new types of fixed and mobile devices such as “Teleportals” that provide always-on “digital presence” in Shared Life Spaces (which include the Earth and near space), as well as remote control that treats some current networked electronic devices as “subsidiary devices” and provides means for their shared use, perhaps even evolving some toward becoming accessible and useful commodities. In some examples means are provided to control various networked electronic devices and turn them into commodity “subsidiary devices,” enabling more users at lower cost, including more uses of their applications and digital content.
In some examples of this Alternate Reality, reporting on the success of various choices and settings is visible and widely accessible, and the various components and systems of the Expandaverse may have settings saved, reported on, accessed and distributed for copying; it therefore becomes possible for human economic and cultural evolution to gain a new scope and speed for learning, distributing and adopting what is most effective for simultaneously achieving multiple ranges of both individually and collectively chosen goals. In a brief summation of the Expandaverse, it is an Alternate Reality and these are just some of the characteristics of its divergent “digital realities,” and its scope and scale are not limited by this or by any description of it.
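
To make the boundary and settings-copying ideas above concrete, the following minimal sketch (in Python; all class names and fields are hypothetical assumptions for illustration, as the ARTPM does not prescribe an implementation) models one person's identities, each identity's bounded realities, and an export of those settings so others may access and copy them:

```python
# Illustrative sketch only: class and field names are hypothetical,
# not taken from the ARTPM specification.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Boundary:
    """One identity's managed reality boundary (ARM-style)."""
    priorities: List[str] = field(default_factory=list)       # interests to include ("what's in")
    filters: List[str] = field(default_factory=list)          # exclusions ("what's out")
    paywalls: Dict[str, float] = field(default_factory=dict)  # source -> price for this identity's attention
    protections: List[str] = field(default_factory=list)      # protections from what is excluded

@dataclass
class Identity:
    """A person may hold a plurality of identities, each with its own realities."""
    name: str
    realities: Dict[str, Boundary] = field(default_factory=dict)

    def export_settings(self) -> Dict:
        """Settings may be saved and shared so others can copy 'best choices'."""
        return {label: vars(b) for label, b in self.realities.items()}

# One person, one of several identities, controlling its own digital reality:
work = Identity("professional-self")
work.realities["office"] = Boundary(priorities=["clients"], filters=["ads"],
                                    paywalls={"advertisers": 0.05})
print(work.export_settings())
```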

Unlike fiction, however, this is the engineering of an Alternate Reality in which the know-how for achieving human success and human goals is widely delivered and either provided free or sold commercially. It is as if a successful Alternate Reality can now exist in a world parallel to ours—the Expandaverse as a parallel digital “universe”—and this describes the devices, technology(ies), infrastructure and “platform(s)” that comprise it, which is herein named the Alternate Reality Teleportal Machine (ARTPM). With an ARTPM, modern technological civilization gains an engineered dynamic machine (that includes devices, utilities, systems, applications, identities, governances, presences, alternate realities, shared life spaces, machines, etc.) that provides means that range from bottom-up support of individuals, to top-down support of collective groups and their goals, with the results from a plurality of activities tracked, measured and reported visibly. In this Alternate Reality, a plurality of ways that people and groups choose to act are known and visible, along with dynamic guidance and reporting, so that a plurality of individuals and groups may see what works and rapidly choose higher levels of personal and economic success, with faster rates of growth toward economic prosperity as well as means for disseminating it. In sum, this Alternate Reality differs from current atomized individual technologies in separate fields by presenting a metamorphosed, divergent reality that re-interprets and re-integrates current and new technologies to provide means to build a different type of connected, success-focused, and evolving “world”—an Expandaverse with a range of differences and variations from our own reality.

Just as fiction authors present alternate histories, the Expandaverse also proposes an alternate history and timeline from our own, which is the same history as ours until a “digital discontinuity” causes a divergence from our history. Like our reality, the Expandaverse had ancient civilizations and the Middle Ages. It also shared the Age of Physical Discovery in which Columbus discovered the “new world” and started the “age of new physical property rights” in which new lands were explored and claimed by the English, Spaniards, Dutch, French and others. Each sent settlers out into their new territories. The first settlers received “land grants” for their own farms and “homesteads”. By moving into these new territories the new settlers were granted new property and rights over their new physical properties. As the Earth became claimed as property everywhere, the physical Earth eventually had all of its physical property owned and controlled. Eventually there was no more “free land” available for granting or taking. Now, when you “move” someplace new, its physical properties are already owned and you must buy your physical property from someone else.

In this alternate history, the advent of an Expandaverse provides new “digital realities” that can be created and designed for specific purposes, with parts or all of them owned as new “intellectual property(ies),” then modified and improved with the means to create more digital realities—so a plurality of new forms of digital properties may be created continuously, with some more valuable than others, and with new improvements that may be adopted rapidly from others, continuously making some types of digital realities (and their digital properties) more valuable than others. Therefore, due to an ARTPM, new digital properties can be continuously created and owned, and multiple different types of digital realities can be created and owned by each person. In the Expandaverse, digital property (such as intellectual properties) may become an acceptable new form of recognized property, with systems of digital property rights that may be improved and worked out in that alternate timeline. Because the Expandaverse's new “digital realities” are continuous realities, that intellectual property does not expire (unlike current intellectual property in our Universe, which expires), so in the Expandaverse digital property rights are salable and inheritable assets, just as physical property is in the current reality. One of the new components of an Expandaverse is that new “digital realities” can be created by individuals, corporations, non-profits, governments, etc., and that these realities and their components can be owned, sold, inherited, etc. with the same differences in values and selling prices as physical properties—but with a key difference: unlike the physical Earth, which ran out of new property after the entire planet was claimed and “homesteaded,” the ARTPM's Expandaverse provides continuous economic and lifestyle opportunities for new “digital properties” that can be created, enjoyed, broadcast, shared, improved and sold. The ability to imagine and to copy others' successes becomes a new source of rapidly expanding personal and group wealth when the ability to turn imagination into assets becomes easier, the ability to spread new digital realities becomes an automated part of the infrastructure, and the ability to monetize new digital properties becomes standardized.

In addition, in some examples one or a plurality of these are entertainment properties, which include in some examples traditional entertainment properties that include concepts such as new ARTPM devices or ARTPM technologies (such as novels, movies, video games, television shows, songs, art works, theater, etc.); in some examples traditional entertainment properties to which are added ARTPM components, such as a constructed digital reality that fits the world of a specific novel, the world of a specific movie, the world of a specific video game, etc.; and in some examples a new type of entertainment such as RealWorld Entertainment (herein RWE), which blends a fictional reality (such as in some examples the alternate history of the Expandaverse) with the real world into a new type of entertainment that fits in some examples fictional situations, in some examples real situations, in some examples fictional characters' needs, and in some examples real people's needs.

The literary genre of science fiction was created when authors such as Jules Verne and H. G. Wells reconceptualized the novel as a means for introducing entire worlds containing imagined devices, characters and living patterns that did not exist when they conceived them. Many “novel” concepts conceived by “novelists” have since been turned into patented inventions stemming from their stories, in fields like submarines, video communications, geosynchronous satellites, virtual reality, the internet, etc. This takes a parallel but different step with technology itself. Rather than starting by writing a fictional novel, this reconceptualizes current and new technology into an Alternate Reality that includes new combinations, new machines, new devices, new utilities, new communications connections, new “presences”, new information “flows,” new identities, new boundaries, new governances, new realities, etc. that provide an innovative reality-wide machine with technologies that focus on human success and economic abundance. In its largest sense it utilizes digital technologies to reconceptualize reality as under both collective and individual control, and provides multiple means that in combination may achieve that.

An analogy is electricity that flows from standardized wall sockets in nearly every room and public place, so it is now “standard” to plug in a wide range of “standardized” electrical devices, turn them on and use them (as one part of this example, the electric plug that transfers power from a standardized electric power grid is itself the product of numerous inventions with many patents; the simple electric plug did not begin with universal utility and connectivity). Herein, it is a startling idea that human success, remote digital presence (Shared Planetary Life Spaces or SPLS), multiple identities, individually controlled boundaries that define multiple personal realities, new types of governances, and/or myriad opportunities to achieve wider economic prosperity might be “universally delivered” during everyday activities over the “utility(ies)” equivalent to an electric power grid, by standardized means that are equivalents to multiple types of electric plugs. In this Alternate Reality, personal and group success are not just sometimes possible for a few who acquire an education, earn a lot of money and piece together disparate complex products and services. Instead, this Alternate Reality may provide new means to turn the world and near-space into one shared, successful digital room. In that Alternate Reality “room” the prosperity and quality of life of individuals, groups, companies, organizations, societies and economies—right through civilization itself—might be reborn for those at the bottom, expanded for those part-way up the ladder, and opened to new heights for those at the top—while being multiplied for everyone by being delivered in simultaneous multiple versions that are individually modifiable by commonly accessible networks and utility(ies). Given today's large and growing problems such as the intractability of poverty, economic stagnation of the middle class, short lifetimes that cannot be meaningfully extended, incomes that do not support adequate retirement for the majority, some governments that contain human aspirations rather than achieve them, and other limitations of our current reality, a world that gains the means to become one large, shared and successful room would unquestionably be an Alternate Reality to ours.

This Alternate Reality shares much with our current reality, including most of our history, along with our underlying principles of physics, chemistry, biology and other sciences—and it also shares our current technologies, devices, networks, methods and systems that have been invented from those sciences. Those are employed herein and their teachings are not repeated. However, this Alternate Reality is based on a reconceptualization of those scientific and technological achievements plus more, so that their net result is a divergent reality whose processes focus more on means to expand humanity's success and satisfaction; with new abilities to transform a plurality of issues, problems and crises on both individual and group levels; along with new opportunities to achieve economic prosperity and abundance.

The components of this Alternate Reality are numerous and substantially different from our reality. One of the major differences is with the way “reality” is viewed today. The current reality is physical and local and it is well-known to everyone—when you walk down a public city street you are present on the street and can see all the people, sidewalks, buildings, stores, cars, streetlights, security cameras—literally everything that is present on the street with you. Similarly, all the people present on that street at that time can see you, and when you are physically close enough to someone else you can also hear each other. Today's digital technologies are implicitly different. Using a telephone, video conference, video call, etc. involves identifying a particular person or group and then contacting that person or group by means such as dialing a phone number, entering a web address, connecting two video conferencing systems at a particular meeting time, making a computer video phone call, etc. Though not explicitly expressed, digital contact implies a conscious and mechanical act of connecting two specific people (or connecting two specific groups in a video conference). Unlike the simultaneous mutual presence of physical reality, making digital contact means reaching out and employing a particular device and communication means to make a contact and have that accepted. Until you attempt this contact and another party accepts it, you do not see and hear others digitally, and those people do not see you or hear you digitally. This is fundamentally different from the ARTPM, one of whose means is expressed herein as Shared Planetary Life Spaces (or SPLS's).

Current devices (which include hardware, software, networks, services, data, entertainment, etc.): The current reality's means for these various types of digital contact, communications and entertainment superficially appear diverse and numerous. A partial list includes mobile phones, wearable digital devices, PCs, laptops, netbooks, tablets, pads, online games, television set-top boxes, “smart” networked televisions, digital video recorders, digital cameras, surveillance cameras, sensors (of many types), web browsers, the web, Web applications, websites, interactive Web content, etc. These numerous different digital devices have separate operating systems, interfaces and networks; different means of use for communications and other tasks; different content types that sometimes overlap with each other (with different interfaces and means for accessing the same types of content); etc. There are so many types and so many products and services in each type that it may appear to be an entire world of differences. When factored down, however, their similarities overwhelm their differences. Many of these different devices provide the same features with different interfaces, media, protocols, networks, operating systems, applications, etc.: They find, open, display, scroll, highlight, link, navigate, use, edit, save, record, play, stop, fast forward, fast reverse, look up, contact, connect, communicate, attach, transmit, disconnect, copy, combine, distribute, redistribute, broadcast, charge, bill, make payments, accept payments, etc. In a current reality that superficially appears to have too many different types of devices and interfaces to ever be made simple and productive, the functional similarities are revealing. This is fundamentally different from the ARTPM which simplifies devices into Teleportals plus networked electronic devices (including some applications and some digital content) that may be remotely controlled and used as “subsidiary devices,” to reduce some types of complexity while increasing productivity at lower costs, by means of a shared and common interface. Again, the Expandaverse's digital reality may turn some electronic devices and some of their uses into the digital equivalent of one simpler connected room.
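
To illustrate the functional similarity described above, the sketch below (Python; all class and method names are illustrative assumptions, not the ARTPM's actual interface) shows how one common set of verbs could be adapted onto very different “subsidiary devices,” each of which keeps its own native protocol behind the shared interface:

```python
# Minimal sketch under stated assumptions: it only illustrates how one common
# interface could span many "subsidiary devices"; all names are hypothetical.
from abc import ABC, abstractmethod

class CommonInterface(ABC):
    """Shared verbs that many different devices already provide separately."""
    @abstractmethod
    def find(self, query: str): ...
    @abstractmethod
    def open(self, item: str): ...
    @abstractmethod
    def play(self, item: str): ...
    @abstractmethod
    def connect(self, target: str): ...

class SubsidiaryDevice(CommonInterface):
    """Adapter: a networked set-top box, phone, PC, etc. controlled remotely."""
    def __init__(self, native_api):
        self.native = native_api                      # device keeps its own protocol
    def find(self, query): return self.native.search(query)
    def open(self, item): return self.native.launch(item)
    def play(self, item): return self.native.start_playback(item)
    def connect(self, target): return self.native.dial(target)

class FakeSetTopBox:
    """Stand-in native API so the sketch runs end to end."""
    def search(self, q): return f"results for {q}"
    def launch(self, i): return f"opened {i}"
    def start_playback(self, i): return f"playing {i}"
    def dial(self, t): return f"connected to {t}"

tv = SubsidiaryDevice(FakeSetTopBox())
print(tv.play("news channel"))   # the same verb works across very different devices
```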

At a high level this Alternate Reality includes numerous major reversals, divergences and transformations from the current physical reality and its devices, which are described herein. A partial list of current assumptions that are simultaneously reversed or transformed includes:

Realities: FROM one reality TO multiple realities (with multiple identities).

Control over Reality: FROM one reality controls people TO we each choose and control our own multiple identities and each identity's one or multiple digital realities.

Boundaries: FROM invisible and unconscious TO explicit, visible and managed.

Death: FROM one too-short life without real life extension TO horizontal life expansion through multiple identities.

Presence: FROM where you are in a physical location TO everywhere in one or a plurality of digital presences (as one individual or as multiple identities).

Connectedness: FROM separation between people TO always-on connections.

Contacts: FROM trying to phone, conference or contact a remote recipient TO being always present in digital Shared Space(s) from your current Device(s) in Use.

Success: FROM you figure it out TO success is delivered by one or a plurality of networks and/or utilities.

Privacy: FROM private TO tracked, aggregated and visible (especially “best choices” so leaping ahead is obvious and normal)—with some types of privacy strengthened because multiple identities also enable private identities and even secret identities.

Ownership of Your Attention: FROM you give it away free TO you can earn money from it (via Paywalls) if you want.

Ownership of Devices and Content: FROM each person buys these TO simplified access and sharing of commodity resources.

Trust: FROM needing protection TO most people are good when instantly identified and classified, with automated protection from others.

Networks: FROM transmission and communications TO identifying, tracking and surfacing behavior and identity(ies).

Network Communications: FROM electronic (web, e-store, email, mobile phone calls, e-shopping/e-catalogs, tweets, social media postings, etc.) TO personal and face-to-face, even if non-local.

Knowledge: FROM static knowledge that must be found and figured out TO active knowledge that finds you and fits your needs to know.

Rapidly Advancing Devices: FROM you're on your own TO two-way assistance.

Buying: FROM selling by push (marketing and sales) and pull (demand) TO interactive during use, based on your current actions, needs and goals.

Culture: FROM one common culture with top-down messages TO we each choose our multiple cultures and set our boundaries (paywalls, priorities [what's in], filters [what's out], protection, etc.) for each of our self-directed realities.

Governances: FROM one set of broad and “we control you” governments TO governments plus choosing your goals and then choosing one or multiple governances that help achieve the goals you want.

Acceptance of limits: FROM we are only what we are TO we each choose large goals and receive two-way support, with multiple new ways to try and have it all (both individually and collectively).

Thus, the current reality starts with physical reality predominant and one-by-one short digital contacts secondary, with numerous different types of devices for many of the same types of functions and content. The “Alternate Reality Teleportal Machine” (ARTPM) enables multiple realities, multiple digital identities, personal choice over boundaries (for multiple types of personal boundaries), with new devices, platforms and infrastructures—and much more.

The ARTPM ultimately raises fundamental questions: Can we be happier? Significantly better? Much more successful? Able to turn obstacles into achievements? If we can choose our own realities, if we can create realities, if we can redesign realities, if we can surface what succeeds best and distribute and deliver that rapidly worldwide via the everyday infrastructure—in some examples to those who need it, at the time and place they need to succeed—then who or what will we choose to be? What will we want to become next? How long will it be before we choose our dreams and attempt to reach them, both individually and collectively?

The ARTPM helps make reality into a do-it-yourself opportunity. It does this by reversing a plurality of current assumptions; in some examples these reversals are substantial. In some examples people are more present remotely than face-to-face, and focus on those remote individuals, groups, places, tools, resources, etc. that are most interesting to them, rather than have a primary focus on the people where they are physically present. In some examples the main purposes of networks and communications are to track and surface behavior and activities, so that networks and various types of remote applications constantly know a great deal about who does what, where, when and how—right down to the level of each individual (though people may have private and secret identities that maintain confidentiality); this is a main part of transforming networks into a new type of utility that does more than provide communications and access to online content and services; new online components serve individuals (in some examples helping them succeed) by knowing what they are doing and helping them overcome difficulties. In some examples being tracked, recorded and broadcast is a normal part of everyday life, and this offers new social and business opportunities, including both personal broadcast opportunities and new types of privacy options. In some examples active knowledge, information and entertainment are delivered where and when needed by individuals (in some examples by an Active Knowledge Machine [AKM], Active Knowledge Interactions [AKI], and contextually appropriate Active Knowledge [AK]), to raise individual success and satisfaction in a plurality of tasks with a plurality of devices (in some examples various everyday products and services). Combined, AKI/AK are designed to raise productivity, outcomes and satisfaction, which raises personal success (both economic and in other ways), and to produce a positive impact on broader economic growth such as through an ability to identify and spread the most productive tools and technologies. In addition, Active Knowledge offers new business models and opportunities—in some examples the ability to sell complete lifestyles with packages of products and services that may deliver measurable and even assured levels of personal success and/or satisfaction, or in some examples the ability to provide new types of “governances” whose goals include collective successes, etc. In some examples privacy is not as available for individuals, corporations and institutions; more of what each person does is tracked, recorded and/or reported publicly; but because of these tracked data and interactions, dynamic continuous improvement may be built into a plurality of online capabilities that employ Active Knowledge of both behaviors and results. The devices, systems and abilities to improve continuously, and deliver those capabilities online as new services and/or products, are owned and controlled by a plurality of individuals and independent “governances,” as well as by companies, organizations and governments.
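
As one hedged illustration of how such an Active Knowledge Interaction might flow (the function names and lookup table below are hypothetical assumptions; the AKM itself is specified elsewhere herein), a tracked task attempt could both record a visible outcome and trigger delivery of the currently best-known method:

```python
# A minimal, hypothetical sketch of an Active Knowledge Interaction (AKI):
# the names and the lookup table are illustrative, not the AKM's actual design.
from typing import Optional

BEST_KNOWN_METHODS = {
    ("thermostat", "set_schedule"): "Best-reported choice: 20C days / 16C nights.",
    ("camera", "low_light_shot"): "Best-reported choice: raise ISO and steady the camera.",
}

def record_outcome(device: str, task: str, success: bool) -> None:
    pass  # a full system would update the visible, aggregated success metrics

def on_task_attempt(device: str, task: str, succeeded: bool) -> Optional[str]:
    """When a tracked task fails, deliver the currently best-known how-to (AK)."""
    record_outcome(device, task, succeeded)
    if succeeded:
        return None
    return BEST_KNOWN_METHODS.get((device, task), "No guidance yet; outcome recorded.")

print(on_task_attempt("thermostat", "set_schedule", succeeded=False))
```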

In some examples, various types of Teleportal Devices automatically discover their appropriate connections and are configured automatically for their owner's account(s), identity(ies) and profile(s). Advance or separate knowledge of how to turn on, configure, log in and/or use devices, services and new capabilities successfully is reduced substantially by automation and/or delivery of task-based knowledge during installation and use. In addition, an adaptable consistent user interface is provided across Teleportal Devices. In some examples a visible model of “see the best and most successful choices” then “try them and you'll succeed in using them” then “if you fail keep going and you'll be shown how” is available like electricity, as a new type of utility—to enable “fast follower” processes so more may reach the higher levels of success sooner. While the nation-state and governments continue, in some examples multiple simultaneous types of “governances” provide options so that a plurality of individuals may join, leave, or have different types of associations with multiple governances at one time. Three of a plurality of types of governances are illustrated herein: an IndividualISM, in which each member has virtual personal sovereignty and self-control (including in some examples the right to establish a plurality of virtual identities, and to own the work, properties, incomes and assets from their multiple identities); a CorporatISM, in which one or a group of corporations may sell plans that include targeted levels of personal success (such as an “upward mobility lifestyle”) across a (potentially broad) package of products and services consumption levels (that can include in some examples housing, transportation, financial services, consumer goods, lifelong education, career success, wealth and lifestyle goals, etc.); and a WorldISM, in which a central governance supports and/or requires a set of values (that may include in some examples environmental practices, beliefs, codes of conduct, etc.) that span national boundaries and are managed centrally; other new and potentially useful types of governances are also possible (as may be exemplified by any field of focused interest and activity such as photography, fashion, travel, participating in a sport, a non-mainstream lifestyle such as nudism, a parent's group such as a local PTA, a type of charity such as Ronald McDonald Houses, etc.). While life spans are limited by human genetics, in some examples individuals have the equivalent of life extension by being able to enjoy multiple identities (that is, multiple lives) at one time during their one lifetime. Multiple identities also provide greater freedom and economic independence, since each identity may own assets, businesses, etc. in addition to a single individual's normal job and salary, or may be used to try and enjoy multiple lifestyles. Within one's limited life span, multiple identities provide each person the opportunity to experience multiple “lives” (in some examples multiple lifestyles and multiple incomes) where each identity can be created, changed, or eliminated at any time, with the potential for an additional identity(ies) or group of identities to become wealthier, more adventurous and/or happier than one's everyday typical wage-earning “self”.
In some examples human success is an engineered dynamic process that operates to help a plurality of those who are connected by means of an agnostic infrastructure whose automated and self-improving human success systems range from bottom-up support of individuals who operate independently, to top-down determination and “selling” of collective goals by new types of “Governances” that seek to influence and control groups (in some examples by IndividualISMs, CorporatISMs, WorldISMs, or other types of Governances). In some examples individuals and groups may leap ahead with a visible “fast follower” process: Humanity's status and results in a plurality of areas are reported publicly and visibly so that a plurality of the ways that people and groups choose and construct this Alternate Reality are known and visible, including a plurality of their “best” and most successful activities, devices, actions, goals, rates of success, results and satisfaction (that is, more of what we choose, do and achieve is tracked, measured, reported visibly, etc.), so that people may know which choices, products, services, etc. work best, and a plurality of individuals and groups may use this reporting. There are direct processes for accessing the same choices, settings, configurations, etc. that produce the “best” successes so that others may copy them, try them and switch to those that work best for them, based on what they want to achieve for themselves, their families, those with whom they enjoy Shared Planetary Life Spaces, etc.
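
A minimal sketch of this “fast follower” copying process (Python; the data layout and field names are assumptions for illustration only) might rank published settings by their visibly reported success and copy the best for one's own use:

```python
# Illustrative "fast follower" sketch (hypothetical names): published settings
# are ranked by visibly reported success, and the best may be copied directly.
from typing import Dict, List

published: List[Dict] = [
    {"owner": "identity-A", "settings": {"filter": "strict"}, "success_rate": 0.72},
    {"owner": "identity-B", "settings": {"filter": "open"},   "success_rate": 0.91},
]

def best_settings(reports: List[Dict]) -> Dict:
    """Return the settings with the highest visibly reported success rate."""
    return max(reports, key=lambda r: r["success_rate"])["settings"]

my_settings = dict(best_settings(published))   # copy it, then try and adapt it
print(my_settings)
```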

In sum, while today's current reality is the background (including especially physical reality and its networked electronic devices environment), there are substantial alterations in this Alternate Reality. A “human success” Expandaverse parallels fiction by providing technologies from a different reality that operate by different assumptions and principles, yet it is contemporary to our reality in that it describes how to use current and new technology to build this Alternate Reality, contained herein and in various patent applications, including a range of devices and components—together an Alternate Reality Teleportal Machine (ARTPM).

In our current reality and timeline, by 1982 the output per hour worked in the USA had become 10 times the output per hour worked 100 years before (Romer, 1990; Maddison, 1982). For nearly 200 years economic, scientific and technological advances have produced falling costs, increasing production and scale that has exploded from local to global levels across a plurality of economic areas of creation, production and distribution and a plurality of economies worldwide. Scarcity has been made obsolete for raw materials like rubber and wood as they have been replaced by growing ranges of invented materials such as plastics, polymers and currently emerging nano-materials. Even limited commodities such as energy may yield to abundant sources such as solar, wind and other renewables, as innovations in these fields may make energy more efficient and abundant. More telling, the knowledge resources and communication networks required to drive progress are advancing because the means to copy and re-use digital bits are transforming numerous industries whose products or operating knowledge may be stored and transmitted as digital bits.

Economic theory is catching up with humanity's historic rise of material, energy, knowledge, digital and other types of abundance. Two of the seminal advances are considered to be Robert Solow's “A Contribution to the Theory of Economic Growth” (Solow, 1956) and Paul Romer's “Endogenous Technological Change” (Romer, 1990). The former three factors of production (land, labor and capital, with diminishing returns) have been replaced in economic theory by people (with education and skills), ideas (inventions and advances), and things (traditional inputs and capital). These new factors of production describe an economic growth model that includes accelerating technological change, intellectual property, monopoly rents and a dawning realization that widely advancing prosperity might become possible for most of humanity, not just for some.
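
For context only, the two cited models are conventionally written as follows (standard textbook forms, recalled here for the reader rather than claimed as part of this disclosure): Solow treats technology A as exogenous, with diminishing returns to capital and labor, while Romer makes the growth of ideas itself endogenous to the people working on them:

```latex
% Solow (1956): exogenous technology A, diminishing returns to K and L
Y = A\,K^{\alpha}L^{1-\alpha}, \qquad 0 < \alpha < 1
% Romer (1990): technological change produced endogenously by researchers H_A
\dot{A} = \delta\,H_{A}\,A
```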

The old proverb is being rewritten and it is no longer “Give a man a fish and you feed him for today, but teach a man to fish and you feed him for a lifetime.” Today we can say “reinvent fishing and you might feed the world” and by that we mean: invent new means of large-scale ocean fishing, reduce by-catch (which can reach as much as 50% of total catches) to reduce destruction of ocean ecosystems, invent new types of fish farming, reduce external damage from some types of fish farming, improve refrigeration throughout the fish distribution chain, use genetic engineering to create domesticated fish, control overfishing of the oceans, develop hatcheries that multiply fish populations, or invent other ways to improve fishing that have never been considered before—and then deliver those advances to individuals, corporations and governments, and from small groups to societies throughout the global economy. Another way to say this is: the more we invent, learn and implement successfully at scale, the more people can produce, contribute and consume abundantly. Comparing the past two decades to the past two centuries, and to civilization's history before that, shows how increasing the returns from knowledge transforms the speed and scale of widespread transformation and the economic growth opportunities available.

In spite of our progress, this historic shift from scarcity to abundance has been both unequal and inadequate in its scope and speed. There are inequalities between advanced economies, emerging economies and poor undeveloped countries. In every nation there are also huge income inequalities between those who create this expanding abundance as members of the global economy, and those who do local work at local wages and feel bypassed by this growth of global wealth. In addition, huge problems continue to multiply such as increasingly expensive and scarce energy and fuels, climate change, inadequate public education systems, healthcare for everyone, social security for aging populations, economic systems in turmoil, and other stresses that imply that the current rate of progress may need to be greater in scope and speed, and dynamically self-optimizing so it may become increasingly successful for everyone, including those currently left behind.

This “Alternate Reality Teleportal Machine” (ARTPM) offers the “Alternate Reality” suggestion that if our goal is widespread human success and economic prosperity, then the three new factors of production are incomplete. A fourth factor—a Teleportal Machine (TPM) with components described herein, in some examples a Teleportal Utility (herein TPU), an Active Knowledge Machine (herein AKM), an Alternate Realities Machine (herein ARM), and much more that is exemplified herein—conceptually remakes the world into one successful room, with at least some automated flows of a plurality of knowledge to the “point of need” based on each person's, organization's and society's activities and goals, and with tracking and visibility of a plurality of results for continuous improvements. If this new TPM were added to “people, ideas and things,” then the new connections and opportunities might actually enable part or more of this Alternate Reality to provide these types of economic and quality-of-life benefits in our current reality—our opportunities for personal success, personal economic prosperity and many specific advances might be accelerated to a new pace of growth, with new ways that might help replace scarcity with abundance and wider personal success.

To achieve this, examples of TPM components—Teleportal Devices (herein TP Devices)—reinvent the window and the “world” its observers see. Instead of only looking through a wall to the scene outside a room, the window is reinvented as a “Local Teleportal” (LTP, which is a fixed Teleportal) or a “Mobile Teleportal” (MTP, which is a portable Teleportal), which provide two-way connections for every user with the world and with those who also have a Teleportal Device, along with connections to “Remote Teleportals” (RTP) that provide access to remote locations (herein “Places”) and deliver a plurality of types of real-time and recorded video content from a plurality of locations. This TPM also includes Virtual Teleportals (VTP), which can run on devices like cell phones, PDAs, PCs, laptops, Netbooks, tablets, pads, e-readers, television set-top boxes, “smart” televisions, and other types of devices whether in current use or yet to be developed, and which turn a plurality of Subsidiary Devices into Alternate Input Devices (herein AIDs)/Alternate Output Devices (herein AODs; together AIDs/AODs). The TPM also includes integrated networks for applications, in some examples a Teleportal Shared Space Network (or TPSSN); the ability to run applications of a plurality of types, in some examples social networking communications or access to multiple types of virtual realities (Teleportal Applications Network or TPAN); personal broadcasting for communicating to groups of various sizes (Teleportal Broadcast Network or TPBN); and connection to various types of devices. The TPM also includes a Teleportal Network (TPN) to integrate a plurality of components and services, in some examples Shared Planetary Life Space(s) (herein SPLS), an Alternate Realities Machine (ARM) to manage the various boundaries that create these separate realities, and a Teleportal Utility (herein TPU) that enables connections, membership, billing, device addition, configuration, etc. Together, and with ARTPM components, these enable new types of applications; in some examples another component, the Active Knowledge Machine (AKM), adds automated information flows that deliver to users of Teleportal Machines and devices (as defined herein) the knowledge, information and entertainment they need or want at the time and place they need it. Another combinatorial example is the ARM, which provides multiple types of filters, protections and paywalls so the prevailing “common” culture is under each person's control, with both the ability to exclude what is not wanted and an optional requirement that each person must be paid for their attention rather than required to provide it for free. Together, this TPM and its components turn each individual and what he or she is doing into a dynamic filter for the “active knowledge,” entertainment and news they want in their lives, so that every person can take larger steps toward the leading edge of human achievement in a plurality of areas, even when they try something they have never done or known before. In this Alternate Reality, human knowledge, attention and achievement are made controllable, dynamic, deliverable and productive. Humanity's knowledge, especially, is no longer static and unusable until it has been searched for, discovered, deciphered and applied—but instead is turned into a dynamic resource that may increase personal success, prosperity and happiness.
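
The component relationships named above can be summarized in a highly simplified sketch (Python; the class structure is an illustrative assumption that follows the text's acronyms, not the TPM's actual design):

```python
# Hypothetical map of the TPM components named above; illustrative only.
class TeleportalDevice:                        # common base for TP Devices
    def __init__(self, place): self.place = place

class LocalTeleportal(TeleportalDevice): pass   # LTP: fixed
class MobileTeleportal(TeleportalDevice): pass  # MTP: portable
class RemoteTeleportal(TeleportalDevice): pass  # RTP: access to remote Places

class VirtualTeleportal:
    """VTP: runs on an existing device (phone, PC, set-top box, ...)."""
    def __init__(self, host_device): self.host = host_device

class TeleportalUtility:
    """TPU: connections, membership, billing, device addition, configuration."""
    def __init__(self):
        self.devices, self.spls = [], {}
    def register(self, device):                 # auto-add a device to the utility
        self.devices.append(device)
    def open_spls(self, name, members):         # a Shared Planetary Life Space
        self.spls[name] = list(members)

tpu = TeleportalUtility()
ltp = LocalTeleportal("kitchen wall")
tpu.register(ltp)
tpu.open_spls("family", [ltp])
print(tpu.spls)
```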

Economic growth research may confirm the potential for this TPM Alternate Reality. Recent economic research has calculated that cross-country variation in the rate of technology adoption appears to account for at least one-fourth of per capita income differences (Comin et al., 2007 and 2008). That is, when different countries have different rates of adopting new technologies, their economic growth rates differ because new technologies raise the levels of productivity, production and consumption to the level of the newer technologies. Thus, the TPM is explicitly designed to harness the potential for speeding up personal, national and worldwide economic growth at a plurality of personal and group economic levels by improving the types of communications that produce higher rates of personal and group success and thereby economic growth—the production, transmission and use of the ideas and information that improve the outcomes and results that can be achieved from various types of activities and goals.

The history of technology also demonstrates that a new technology may radically transform societies. The development of agriculture was one of the earliest examples, with nomadic humans becoming settled farming cultures. New agricultural surpluses gave rise to the emergence of governments, specialized skills and much more. Similarly, the invention of money altered commerce and trade; and the combination of writing and mathematics altered inventories, architecture, construction, property boundaries and much more. Scientific revolutions like the Renaissance altered our view of the cosmos, which in turn changed our understanding of who and what we are. These transformations continue today, with frequent developments in digital technologies like the Internet, communications, and their many new uses. In the Alternate Reality envisioned by the TPM, a plurality of current devices could be employed so individuals could automatically receive the know-how that helps them succeed in their current step, then succeed in their next step, and the step after that, until through a succession of successful steps they and their children may have new opportunities to achieve their lives' goals. These deliveries can also focus some or much of the Active Knowledge Machine's attention on today's crises such as energy, climate change, supporting aging populations, health care, and basic and lifetime education so previously trained generations can adapt to new and faster changes, and more. In addition, the TPU (Teleportal Utility) and TPN (Teleportal Network) provide flexible infrastructure for adding new devices and capabilities as components that automatically deliver AKM know-how and entertainment, based on what each person does and does not want (through their AKM boundaries), across a range of devices and systems.

Some examples of this expanding future include e-paper on product packaging and various devices (such as but not exclusively Teleportal Packaging or TPP); teleportal devices, in some examples mobile teleportal devices, wearable glasses, portable projectors, interactive projectors, etc. (such as but not exclusively Mobile Teleportals or MTPs); networking and specialized networks that may include areas like lifetime education or travel (such as but not exclusively Teleportal Networks or TPNs); alert systems for areas like business events, violent crimes or celebrity sightings (such as but not exclusively Teleportal Broadcast and Application Networks or TPBANs); personal device awareness for personal knowledge deliveries to one's currently active and preferred devices (such as but not exclusively the Active Knowledge Machine or AKM); etc.

Together, these Alternate Reality Teleportal Machine (ARTPM) components, including the Active Knowledge Machine (AKM) (as well as the types of future networks and additions described herein), imply that new types of communications may lead to more delivery and use of the best information and ideas that produce individual successes, higher rates of economic growth, and various personal advances in Quality of Life (QoL). In some examples, during the use of devices that require energy, users can receive the best choices to save energy, as well as the know-how and instructions to use them so they actually use less energy—as soon as someone switches to a new device or system that uses less energy, from their initial attempt to use it through their daily uses, they may automatically receive the instructions or know-how to make a plurality of difficult steps easier, more successful, etc.

Historically, humanity has seen the most dramatic improvements in its living conditions and economic progress during the most recent two centuries. This centuries-long growth in prosperity flies in the face of the economists' dogma about scarcity and diminishing returns that dominated economic theory while the opposite actually occurred. Abundance has grown so powerful that at times it almost seemed to rewrite “Use it up or do without” into “Throw it out or do without.” With this proven record of wealth expansion, abundance is now the world's strongest compulsion and most individuals' desired economic outcome for themselves and their families. Now, as the micro- and macro-concepts of the TPM become clear, they prompt the larger question of whether an Alternate Reality with widespread growth toward personal success and prosperity might be explicitly designed and engineered. Can a plurality of factors that produce and deliver an Alternate Reality that identifies and drives advances be specified as an innovation that includes means for new devices, systems, processes, components, elements, etc.? Might an Alternate Reality that explicitly engineers an abundance of human success and prosperity be a new type of technology, devices, systems, utility(ies), presence, and infrastructure(s)?

Social and interpersonal activities create awareness of problems and deliver advances that come from “rubbing elbows.” This is routinely done inside a company, on a university campus, throughout a city's business districts such as a garment district or finance center, in a creative center like Silicon Valley, at conferences in a field like pharmaceuticals or biotech, by clubs or groups in a hobby like fishing or gardening, in areas of daily life like entertainment or public education, etc. Can this now be done in the same ways worldwide because new knowledge is both an input to this process and an output from it? In some examples the TPM and AKM are designed to transform the world into one room by resizing our sphere of interpersonal contacts to the scale of a Shared Planetary Life Space(s) plus Active Knowledge, multiple native and alternate Teleportal devices, new types of networks, systems and infrastructures that together provide access to people, places, tools, resources, etc. Could these enable one shared room that might simultaneously be large enough and small enough for everyone to “rub elbows?”

Economies of scale apply. Advances in know-how can be received and used by a plurality simultaneously without being used up—in fact, more use multiplies the value of each advance because the fixed cost of creating a new advance is distributed over more users, so prices can be driven down faster while profits are increased—the same returns to scale that have helped transform personal lives and create developed economies during the last two centuries. The bigger the market, the more money is made: Sell one advance at a high price and go broke, sell a thousand that are each very expensive and break even, but sell millions at a low price and get rich while helping spread that advance to many customers. Abundance becomes a central engine of greater personal success, collective advances, and widely enjoyed welfare. The Alternate Reality described herein is designed to bring into existence a similar wealth of enjoyment from human knowledge, abundance and entertainment—by introducing new means to expand this process to new fields and move increasing numbers of individuals and companies to humanity's leading edge at lower prices with larger profits as we “grow forward.”
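
The underlying arithmetic is the familiar fixed-cost spreading of increasing returns; written out (with illustrative numbers chosen for this sketch, not figures from this disclosure):

```latex
% The scale arithmetic above in its usual form: a fixed creation cost F spread
% over N users, marginal (per-unit) cost c, and price p (illustrative symbols):
\text{average cost}(N) = \frac{F}{N} + c,
\qquad
\text{profit}(N) = N\,(p - c) - F
% e.g. with F = $1,000,000, c = $1, p = $10: N = 1 loses money,
% while N = 1,000,000 yields a profit of N(p - c) - F = $8,000,000.
```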

This TPM also addresses the business issue of enabling an (optional) business evolution from today's dominant silo platforms (such as mobile phone networks, PCs, and cable/satellite television) to a world of integrated and productive Teleportal connectivity. Some current communications and product platforms are supported by business models that lock in their customers. The “network industries” that lock in customers include computers (Windows), telecommunications (cell phone contracts, landline phones, networks like the Internet), broadcasting/television delivery (cable TV and satellite), etc. In contrast, the TPM provides the ability to support both current lock-in as Subsidiary Devices and new business models, permitting their evolution into more effective devices and systems that may produce business growth, because both currently dominant companies and new companies can use these advances within existing business models to preserve customer relationships while entering new markets with either current or new business models; that choice remains with each corporation and vendor.

Whether the business models stay the same or evolve, there are potentially large technology changes and outcome shifts in an Alternate Reality. We started with a culture built on printed books and newspapers, landline telephones, and television with only a few oligopolistic networks. Digital communications and media technologies developed in separate silos to become PCs with individual software applications, the Internet silo, cell phones, and televisions with a plurality of channels and (gradually) on-demand TV. This has produced a “three-screen” marketplace whereby many now use the three screens of computers, televisions and cell phones—even though they are fairly separate and only somewhat interconnected. The rise of the Internet has led to widespread personal creation and distribution of personalized news (blogs, micro-blogging, citizen journalism, etc.), videos, entertainments, product reviews, comments, and other types of content that are based on individual tastes or personal experience, rather than institutional market power (such as from large entertainment or news companies, or major advertisers). Even without a TPM there is a growing emergence of new types of personal-based communications devices, uses, markets, interconnections and infrastructure that break from the past to create a more direct chain from where each of us wants to go to the outcomes people want—rather than a collective “spectacle culture” and brands to which people are guided and limited. With the TPM, however, goals and intentions are surfaced as implicit in activities, actual success is tracked, gaps are identified, and active knowledge deliveries help a plurality cross the bridge from desires to achievements.

Also a focus in the TPM's Alternate Reality, different cognitive and communication styles are emphasized, such as more use of visual screens and less use of paper. At this time there may be a change along these lines which is leading to the decline of paper-dependent and printing-dependent industries such as newspapers and book publishing, and the rise of more digital, visual and new media channels such as e-readers, electronic articles, blogging, twitter, video over the Internet and social media that allow personal choices, personal expertise and personal goals to replace institution-driven, profit-focused world views, with skimming of numerous resources (by means such as search engines, portals, linking, navigation, etc.). This new cognitive style replaces expensive corporate marketing and news media “spectacle” reporting that compel product-focused lifestyles, information, services, belief systems content, and the creation or expansion of needs and wants in large numbers of consumers. In this Alternate Reality there are optional transitions, in some examples from large sources toward individual and one's chosen group sources; from one “self” per person to each person having (optional) multiple identities; from mass culture to selective filtering of what's wanted (even into individually controlled Shared Planetary Life Spaces, whose boundaries are attached to one or a plurality of multiple identities); from reading and interpreting institutional messages to independent and individual creation and selection of personally relevant information; and from fewer broadcasters to potentially voluminous resources for recording, reinterpreting and rebroadcasting; along with larger and more sensory-based (headline, pictorial, video and aural) cognitive styles with “always-on” digital connectivity that includes:

More scanning and skimming of visual layouts and visual content.

A plurality of available resources and connections from LTPs (Local Teleportals), RTPs (Remote Teleportals), TPBNs (Teleportal Broadcast Networks created and run by individuals), TPANs (Teleportal Application Networks), remote control of electronic sources and devices through RCTP (Remote Control Teleportaling) by direct control via a Teleportal Device or through Teleportals located in varied locations, personal connections via MTPs (Mobile Teleportals) and VTPs (Virtual Teleportals), and more.

Increasing volume, variety, speed and density of visual information and visual media, including more frequent simultaneous use of multiple media with shorter attention spans, within separately focused and bounded Shared Planetary Life Spaces.

Growing replacement of long-form printed media such as newspapers and books in a multi-generation transition that may turn long-form content printing (e.g., longer than 3-5 pages) into merely one type of specialized media (e.g., paper is just one format and only sometimes dominant).

Growing replacement of “presence” from a physical location to one's chosen connections, with most of those connections not physically present at most times, but instead communications-dependent through a variety of devices and media.

The evolution of devices and technologies that reflect these cognitive and perceptual transformations, so they can be more fully realized.

And more.

In sum, this Alternate Reality may provide options for the evolution of our cognitive reality with new utility(ies), new devices, new life spaces and more—for a more interactive digital reality that may be more successful, providing the means for achieving and benefiting from new types of economic growth, quality of life improvements, and human performance advantages that may help solve the growing crises of our timeline while replacing scarcity and poverty with an accelerated expansion of abundance, prosperity and the multiple types of happiness each person chooses.

In some examples the ARTPM provides an Alternate Reality that integrates advancing know-how, resources, devices, learning, entertainment and media so that a plurality of users might gain increasing capabilities and achievements with increased connections, speed and scope. From the viewpoint of an Alternate Reality Teleportal Machine (ARTPM), in some examples this is designed to provide new ways to advance economically by delivering human success to a plurality of individuals and groups. It also includes integration of a plurality of devices, siloed business/product platforms, and existing business models so that (r)evolutionary transformations may potentially be achieved.

In this “Alternate Reality's” timeline, humanity has embarked on a rare period of continuous improvements and transformations: What are devices (including products, equipment, services, applications, information, entertainment, networks, etc.)? Increasing ranges and types of “devices” are gaining enough computing, communications and video capabilities to re-open the basic definitions of what “devices” are and should become. A historic parallel is the transformation of engines into small electric motors, which then disappeared into numerous products (such as appliances), with the companion delivery of universal electric power by means of standardized plugs and wall sockets—making the electric motor an embedded, invisible tool that is unseen while people do a wide range of tasks. The ARTPM's implication that human success may undertake a similar evolution and be delivered throughout our daily lives as routinely as electricity from a wall socket may seem startling, but it is just one part. Today's three main screens are the computer, cell phone and television. In the TPM Alternate Reality these three screens may remain the same and fit that environment, or they may disappear into integrated parts of a different digital environment whose Teleportal Devices may transform the range and scope of our personal perception and life spaces, along with our individual identities, capacities and achievements.

The TPM's Alternate Reality provides dynamic new connections between uses and needs with vendors and device designers—a process herein named “AnthroTectonics.” New use-based designs are surfaced as a by-product from the AKM, ARM, TPU and TPM, and systems for this are enumerated. In some examples selling bundles of products and services with targeted levels of success or satisfaction may result, such as a governance's lifestyle plan for “Upward Mobility to Lifetime Luxury” that guides one's consumption of housing, transportation, financial services, products, services, and more—along with integrated guidance in achieving many types of personal and career goals successfully. Together, these and other ARTPM advances may provide expanded goals, processes and visibly reported results; with quantified collective knowledge and desires resulting in new types of digitally connected relationships in some examples between people, vendors, governances, etc. The companies and organizations that capture market share by being able to use these new Alternate Reality systems and the resulting device advances can also control intellectual property rights from many new usage-driven designs of numerous types of devices, systems, applications, etc. The combination of these competitive advantages (ARTPM systems-created first-mover intellectual properties, numerous advances in devices and processes, and the resulting deeper relationships between customers and vendor organizations) may afford strong new commercial opportunities. In some examples those customers may receive new successes as a new normal part of everyday life—with vendors competing to create and deliver personal and/or lifetime success paths that capture family-level customer relationships that last decades, perhaps throughout entire lives.

This potential “marriage” between powerful corporations, new ways to “own” markets, and systems and processes that attach corporations with their customers' lifetime goals could lead to a growing realization that an Alternate Reality option may exist for our current reality, namely: “If you want a better reality, choose it.”

Because our current reality repeatedly suffers serious crises, at some future crisis powerful corporations that are able to deliver a growing range of human successes may connect with the demands of that larger crisis. Could the fortunes of those global companies rise at that time by using their new capabilities to help drive and deliver new types of successes? Could the fortunes of humanity—first in that crisis and then in its prosperity after that—rise as well?

This innovation's multiple components were created as steps toward a new portfolio that might demonstrate that humanity is becoming able to create and control reality—actually turning it into multiple realities, multiple identities, multiple Shared Planetary Life Spaces, and more—with one of the steps into this future being an attempt to deliver a more connected and success-focused stage of history, one in which individuals, groups, companies, countries and others may pursue the self-realization of their dreams and choices. When the transformations are considered together, each person may gain the ability to specify multiple realities along with the ability to switch between them—more than humanity gaining control of reality, this may be the start of each person's control over it.

Is it possible that a new era might emerge in which one of the improvement options could be: “If you want a better reality, switch it”?

In this document, we sometimes use certain phrases to refer to examples or broad concepts or both that relate to corresponding phrases that appear in current and future claims. We do not mean to imply that there is necessarily a direct and complete overlap in their meaning. Yet, roughly speaking, the reader can infer an association between the following: “Alternate Reality” or “Expandaverse” and the broad concepts to which at least some of the claims are directed; “altered reality” and Alternate Reality; “Shared Planetary Life Spaces” and “virtual places” and “digital presence”; “Alternate Reality Teleportal Machine” and a wide variety of devices, resources, networks, and connections; “Utility” and a publicly accessible network, network infrastructure, and resources, and in some cases cooperating devices that use the network, the infrastructure, and the resources; “Active Knowledge Machine” and “active knowledge management facility”; “Active Knowledge Interactions” and active knowledge accumulation and dissemination; “Active Knowledge” and information associated with activities and derived from users and for which users have goals; “Teleportal Devices” or “TP Devices” and electronic devices that are used at geographically separate locations to acquire and present items of content; “Alternate Realities Machine” and a facility to manage altered realities; “Quality of Life (QoL)” and goals, interests, successes, and combinations of them.

In general, in an aspect, electronic systems acquire items of audio, video, or other media, or other data, or other content, in geographically separate acquisition places. A publicly available set of conventions, with which any arbitrary system can comply, is used to enable the items of content to be carried on a publicly accessible network infrastructure. On the publicly accessible network infrastructure, services are provided that include selecting, from among the items of content, items for presentation to recipients through electronic devices at other places. The selecting is based on (a) expressed interests or goals of the recipients, to whom the items will be presented, and (b) variable boundary principles that encompass boundary preferences derived both from sources of the items of content and from the recipients to whom the items are to be presented. The variable boundary principles define a range of regimes for passing at least some of the items to the recipients and blocking at least some of the items from the recipients. The selected items of content are delivered to the recipients through the network infrastructure to the devices at the other places in compliance with the publicly available set of conventions. At least some of the selected items are presented to the recipients at the presentation places automatically, continuously, and in real time, putting aside the latency of the network infrastructure.
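In some examples, such selecting may be expressed in software. The following minimal Python sketch is purely illustrative; the names (Item, BoundaryPrinciples, select_items) are hypothetical, and the logic shows only one possible pass/block regime among the range of regimes described above:

    # Illustrative sketch only: one possible pass/block regime for selecting
    # items of content based on expressed interests and variable boundary
    # principles. All names are hypothetical.
    from dataclasses import dataclass, field

    @dataclass
    class Item:
        source: str   # where the item was acquired
        topic: str    # subject matter of the item
        payload: str  # audio, video, or other content (stubbed as text)

    @dataclass
    class BoundaryPrinciples:
        include_topics: set = field(default_factory=set)   # pass regime
        exclude_sources: set = field(default_factory=set)  # block regime

    def select_items(items, interests, boundaries):
        # Pass items matching expressed interests or included topics;
        # block items from excluded sources.
        selected = []
        for item in items:
            if item.source in boundaries.exclude_sources:
                continue
            if item.topic in interests or item.topic in boundaries.include_topics:
                selected.append(item)
        return selected

    items = [Item("camA", "travel", "plaza feed"),
             Item("adNet", "ads", "banner"),
             Item("camB", "sports", "stadium feed")]
    b = BoundaryPrinciples(include_topics={"travel"}, exclude_sources={"adNet"})
    print(select_items(items, interests={"sports"}, boundaries=b))

In this sketch the boundary preferences of sources and recipients are merged into one structure; a fuller treatment would keep them separate and reconcile them, as the aspect above contemplates.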

Implementations may include one or more of the following features. The electronic systems include cameras, video cameras, mobile phones, microphones, speakers, and computers. The electronic systems include software to perform functions associated with the acquisition of the items. The publicly available set of conventions also enable the items of content to be processed on the publicly accessible network infrastructure. The services provided on the publicly accessible network infrastructure are provided by software. At least one of the actions of (a) using electronic systems to acquire items in acquisition places, (b) using a publicly available set of conventions, (c) providing services, (d) delivering selected items, and (e) presenting some of the selected items, is performed by resources that include hardware, software, or a combination of hardware and software, that are part of the network infrastructure, part of the electronic devices, or part of presentation devices at the presentation places, or a combination of them. At least some of the acquisition places are also presentation places. The resources include controller resources that remotely control other controlled resources. The controlled resources include at least one of computers, television set-top boxes, digital video recorders (DVRs), and mobile phones. The usage of at least some of the resources is shared. The shared usage may include remote usage, local usage, or networked usage. The items are acquired by people using resources. At least one of the actions is performed by at least one of the resources in the context of a revenue generating business model. The revenue is generated in connection with at least one of (a) using electronic systems to acquire items in acquisition places, (b) using a publicly available set of conventions, (c) providing services, (d) delivering selected items, (e) presenting some of the selected items, (f) or advertising in connection with any of them. The revenue is generated using hardware, software, or a combination of hardware and software, that are part of the network infrastructure, part of the electronic devices, or part of presentation devices at the presentation places, or a combination of them.

In general, in an aspect, items of audio, video, other media, or other data, or other content are acquired from sources located in geographically separate places. The items of content are communicated to a network infrastructure. On the network infrastructure, services are provided that include selecting, from among the acquired items of content, items for presentation to recipients at other places, the selecting being based on (a) expressed interests or goals of the recipients to whom the items will be presented, and (b) variable boundary screening principles that are based on source preferences derived from the sources of the content and recipient preferences derived from recipients to whom the items are to be presented. The items of content are transmitted to the other places, and at least some of the selected items are presented to the recipients at the other places automatically, continuously, and in real time, relative to their acquisition, taking account of time required to communicate, select, and transmit the items.

Implementations may include one or more of the following features. At least one of the actions of (a) acquiring items, (b) communicating items, (c) providing services, (d) transmitting items, and (e) presenting some of the selected items, is performed by resources that include hardware, software, or a combination of hardware and software, that are part of the network infrastructure, part of the electronic devices, or part of presentation devices at the presentation places, or a combination of them. The expressed interests or goals of the recipients, to whom the items will be presented, define characteristics of an alternate reality, relative to an existing reality that is represented by real interactions between those recipients and the electronic devices located at the presentation places. The acquired items of content include (a) active knowledge, associated with activities, derived from users of at least some of the electronic systems at the separate places, for which the users have goals, (b) information about success of the users in reaching the goals, and (c) guidance information for use in guiding the users to reach the goals, the guidance information having been adjusted based on the success information, and the adjusted guidance information is presented to the users. The electronic systems include digital cameras. The activities include actions of the users on the electronic systems, and the information about success is generated by the electronic systems as a result of the actions. The guidance information is presented to the users through the electronic systems. The guidance information is presented to the users through systems other than the electronic systems. The presenting of the selected items to the recipients at the presentation places and the acquisition of items at the acquisition places establish virtual shared places that are at least partly real and at least partly not real, and the recipients are enabled to experience having presences in the virtual places. The network infrastructure includes an accessible utility that is implemented by devices, can communicate the items of content from the acquisition places to the presentation places based on the conventions, and provides services on the network infrastructure associated with receiving, processing, and delivering the items of content. The items are acquired at digital cameras in the acquisition places, and the interests and goals of the recipients relate to photography. The recipients include users of the digital cameras, and the selected items that are presented to the recipients include information for taking better photographs using the digital cameras. The recipients are designers of digital cameras, and the selected items that are presented to the designers include information for improving designs of the digital cameras. The resources provide governances. The items relate to activities at the acquisition places and the items selected for presentation to recipients at the other places concern a governance for at least one of the recipients. The variable boundary principles encompass, for each of the recipients to whom the items are to be presented, more than one identity. Coordinated globally accessible directories are maintained of the items of content, the communications of the items of content, the places, the recipients, the interests, the goals, and the variable boundary principles.

In general, in an aspect, a method includes using electronic devices at geographically separate locations to acquire items of content and to present the items of content to geographically separate participants as part of virtual places, each of which is persistent and at least partially remote with respect to the participants, and using one or more presence management facilities to enable two or more of the participants to be present in one or more of the virtual places at any time, continuously, and simultaneously.
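In some examples, a presence management facility may be realized as a service that tracks which identities are present in which virtual places. The following Python sketch is illustrative only; PresenceManager and its methods are hypothetical names:

    # Illustrative sketch of a presence management facility that lets two or
    # more participants be present in a virtual place at any time,
    # continuously, and simultaneously. Names are hypothetical.
    from collections import defaultdict

    class PresenceManager:
        def __init__(self):
            # virtual place -> identities currently present
            self._present = defaultdict(set)

        def join(self, place, identity):
            self._present[place].add(identity)

        def leave(self, place, identity):
            self._present[place].discard(identity)

        def present_in(self, place):
            return set(self._present[place])

    pm = PresenceManager()
    pm.join("family_room", "alice:home")
    pm.join("family_room", "bob:mobile")
    print(pm.present_in("family_room"))  # both presences, simultaneously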

Implementations may include one or more of the following features. One or more background management facilities are used to manage the items of content in a manner to present and update background contexts for the virtual places as experienced by the participants. One or more of the background management facilities operates at multiple locations. The different background contexts are presented to different participants in a given virtual place. One or more of the background management facilities changes one or more background contexts of a virtual place by changing one or more locations of the background context. The background context of a virtual place includes commercial information. The background context of a virtual place includes any arbitrary location. The background context includes items of content representing real places. The background context includes items of content representing real objects. The real objects include advertisements, brands of products, buildings, and interiors of buildings. The background context includes items of content representing non-real places. The background context includes items of content representing non-real objects. The non-real objects include CGI advertisements, CGI illustrations of brands of products, and buildings. One or more of the background management facilities responds to a participant's indicating items of content to be included or excluded in the background context. The participant indicates items of content associated with the participant's presence that are to be included or excluded in the participant's presence as experienced by other participants. The participant indicates items of content associated with another participant's presence that are to be included or excluded in the other participant's presence as experienced by the participant. One or more of the background management facilities presents and updates background contexts as a network facility. The background contexts are updated in the background without explicit action by any of the participants. One or more of the background management facilities presents and updates background contexts without explicit action by any of the participants. One or more of the background management facilities presents and updates background contexts for a given one of the virtual places differently for different participants who have presences in the virtual place. One or more of the background management facilities responds to at least one of: participant choices, automated settings, a participant's physical location, and authorizations. One or more of the background management facilities presents and updates background contexts for the virtual places using items of content for partial background contexts, items of content from distributed sources, pieced together items of content, and substitution of non-real items of content for real items of content. One or more of the background management facilities includes a service that provides updating of at least one of the following: background contexts of virtual places, commercial messages, locations, products, and presences. One or more of the presence management facilities receives state information from devices and identities used by a participant and determines a state of the presence of the participant in at least one of the virtual places. 
One or more of the presence management facilities receives state information from devices and identities used by a participant and determines a state of the presence of the participant in a real place. The presence state is made available for use by presence-aware services. The presence state is updated by the presence management facility. The presence state includes the availability of the user to be present in the virtual place. One or more of the presence management facilities controls the visibility of the presence states of participants. One or more of the presence management facilities manages presence connections automatically based on the presence states.
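In some examples, the presence state may be derived from the states reported by a participant's devices. The following minimal Python sketch is purely illustrative; the device identifiers (e.g., "ltp1" for a Local Teleportal, "mtp1" for a Mobile Teleportal) and the three-way state model are assumptions:

    # Illustrative sketch: deriving a presence state from device state
    # information, for use by presence-aware services.
    def presence_state(device_states):
        # device_states maps device id -> 'active', 'idle', or 'off'
        if any(s == "active" for s in device_states.values()):
            return "available"   # the user is present and reachable
        if any(s == "idle" for s in device_states.values()):
            return "away"        # devices are on, but the user is not active
        return "offline"         # no device indicates presence

    print(presence_state({"ltp1": "idle", "mtp1": "active"}))  # available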

In general, in an aspect, a method includes using electronic devices at geographically separate locations to acquire items of content associated with virtual events that have defined times and purposes and occur in virtual places, and to present the items of content to geographically separate participants as part of the virtual events in the virtual places, each of the virtual places and virtual events being persistent and at least partially remote with respect to the participants, and using a virtual event management facility to enable two or more of the participants to have a presence at one or more of the virtual events at any time, continuously, and simultaneously.

Implementations may include one or more of the following features. The virtual events include real events that occur in real places and have virtual presences of participants. The virtual events include elements of real events occurring in real time in real locations. The purposes of the events include at least one of business, education, entertainment, social service, news, governance, and nature. The participants include at least one of viewers, audience members, presenters, entertainers, administrators, officials, and educators. A background management facility is used to manage the items of content in a manner to present and update background contexts for the events as experienced by participants. One or more virtual event management facilities manages an extent of exposure of participants in the events to one another. The participants can interact with one another while present at the events. The participants can view or identify other participants at the events. One or more virtual event management facilities is scalable and fault tolerant. One or more of the presence management facilities is scalable and fault tolerant. The virtual event management facility enables participants to locate virtual events using at least one of: maps, dashboards, search engines, categories, lists, APIs of applications, preset alerts, social networking media, and widgets, modules, or components exposed by applications, services, networks, or portals. The virtual event management facility regulates admission or participation by participants in virtual events based on at least one of: price, pre-purchased admission, membership, security, or credentials.
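In some examples, regulation of admission to a virtual event may be expressed as a policy check over the factors listed above. The following Python sketch is illustrative; the field names (price, credential, members_only, and so on) are assumptions:

    # Illustrative sketch: admitting a participant to a virtual event based
    # on price, pre-purchased admission, membership, security, or credentials.
    def admit(participant, event):
        if event.get("credential") and \
           event["credential"] not in participant.get("credentials", []):
            return False                      # security / credential check
        if event.get("members_only") and not participant.get("member"):
            return False                      # membership check
        if event.get("price", 0) > 0 and not participant.get("paid"):
            return False                      # price / pre-purchased admission
        return True

    event = {"price": 5, "credential": "press"}
    print(admit({"paid": True, "credentials": ["press"]}, event))  # True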

In general, in an aspect, a method includes using electronic devices at geographically separate locations to acquire items of content and to present the items of content to geographically separate participants as part of virtual places, each of which is persistent and at least partially local and at least partially remote with respect to the participants, using a presence management facility to enable two or more of the participants to be present in one or more of the virtual places at any time, continuously, and simultaneously, the presence management facility enabling a participant to indicate a focus for at least one of the virtual places in which the participant has a presence, the focus causing the presence of at least one of the other participants to be more prominent in the virtual place than the presences of other participants in the virtual place, as experienced by the participant who has indicated the focus.

Implementations may include one or more of the following features. Presenting items of content to geographically separate participants includes opening a virtual place with all of the participants of the virtual place present in an open connection. In the opened connection, one or more participants focuses the connection so they are together in an immediate virtual space. The focus causes the one participant to be more easily seen or heard than the other participants.
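In some examples, a focus may be realized by weighting how prominently each presence is rendered in the open connection. The sketch below is illustrative; render_weights and the specific weights are hypothetical:

    # Illustrative sketch: a focus makes one participant's presence more
    # prominent (larger, louder) than the others in an open connection.
    def render_weights(presences, focused):
        return {p: (1.0 if p == focused else 0.25) for p in presences}

    print(render_weights(["alice", "bob", "carol"], focused="bob"))
    # {'alice': 0.25, 'bob': 1.0, 'carol': 0.25}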

In general, in an aspect, a method includes enabling a participant to become present in a virtual place by selecting one identity of the participant with which the participant wishes to be present in the virtual place, invoking the virtual place to become present as the selected identity, and indicating a focus for the virtual place to cause the presence of at least one other participant in the virtual place to be more prominent than the presences of other participants in the virtual place, as experienced by the participant who has indicated the focus.

Implementations may include one or more of the following features. The identity is selected manually by the participant. The identity is selected by the participant using a particular device to become present in the virtual place. The identities include identities associated with personal activities of the participant and the virtual places include places that are compatible with the identities. The participant includes a commercial enterprise, the identities include commercial contexts in which the commercial enterprise operates, and the virtual places include places that are compatible with the commercial contexts. The participant includes a participant involved in a mobile enterprise, the identities include contexts involving mobile activities, and the virtual places include places in which the mobile activities occur. The participant selects a device through which to become present in the virtual place. The focus is with respect to categories of connection associated with the presences of the participants in the virtual places. The categories include at least one of the following: multimedia, audio only, observational only, one-way only, and two-way.
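In some examples, becoming present under a selected identity may be sketched as follows; the registry structure and all names are hypothetical, and the selection shown is manual (a device-driven selection would substitute the device's associated identity):

    # Illustrative sketch: a participant selects one identity (manually, or
    # as implied by the device in use) and becomes present under it.
    def become_present(places, place, identities, chosen):
        identity = identities[chosen]
        places.setdefault(place, set()).add(identity)
        return identity

    places = {}
    ids = {"work": "pat@acme", "personal": "pat_home"}
    become_present(places, "studio", ids, "work")
    print(places)  # {'studio': {'pat@acme'}}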

In general, in an aspect, a method includes using electronic devices at geographically separate locations to acquire items of content and to present the items of content to geographically separate participants as part of virtual places, each of which is persistent and at least partially local and at least partially remote with respect to the participants, and using a connection management facility to manage connections between participants with respect to their presences in the virtual places.

Implementations may include one or more of the following features. The connection management facility opens, maintains, and closes connections based on devices and identities being used by participants. The connections are opened, maintained, and closed automatically. The connection management facility opens and closes presences in the virtual places as needed. The connection management facility maintains the presence status of identities of participants in the virtual places. The connection management facility focuses the connections in the virtual places.
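In some examples, the connection lifecycle described above may be sketched as below; ConnectionManager and its methods are hypothetical names, and the automatic open/close rule shown (driven solely by whether a device is online) is an assumption:

    # Illustrative sketch: opening, maintaining, and closing connections
    # automatically based on the devices and identities in use.
    class ConnectionManager:
        def __init__(self):
            self._status = {}   # (identity, place) -> 'open'

        def sync(self, identity, place, device_online):
            if device_online:
                self._status[(identity, place)] = "open"    # open / maintain
            else:
                self._status.pop((identity, place), None)   # close as needed

        def is_present(self, identity, place):
            return (identity, place) in self._status

    cm = ConnectionManager()
    cm.sync("alice:tablet", "family_room", device_online=True)
    print(cm.is_present("alice:tablet", "family_room"))  # True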

In general, in an aspect, a method includes using electronic devices at geographically separate locations to acquire items of content and to present the items of content to geographically separate participants as part of virtual places, each of which is persistent and at least partially local and at least partially remote with respect to the participants, and using a presence facility to derive and distribute presence information about presence of the participants in the virtual places.

Implementations may include one or more of the following features. The presence information is derived from at least one of the following: the participants' activities with the devices, the participants' presences using various identities, the participants' presences in the virtual places, and the participants' presences in real places. The presence facility responds to participant settings and administrator settings. The settings include at least one of: adding or removing identities, adding or removing virtual places, adding or removing devices, changing presence rules, and changing visibility or privacy settings. The presence facility manages presence boundaries by managing access to and display of presence information in response to at least one of: rules, policies, access types, selected boundaries, and settings.

In general, in an aspect, a method includes using electronic devices at geographically separate locations to acquire and present items of content, and using a place management facility to manage the acquisition and presentation of the items of content in a manner to maintain virtual places, each of which is persistent and at least partially local and at least partially remote, and in each of which two or more participants can be present at any time, continuously, and simultaneously.

Implementations may include one or more of the following features. The items of content include at least one of: a real-time presence of a remote person, a real-time display of a separately acquired background such as a place, and a separately acquired background content such as an advertisement, product, building, or presentation. The presence is embodied in at least one of video, images, audio, text, or chat. The place management facility does at least one of the following with respect to the items of content: auto-scale, auto-resize, auto-align, and in some cases auto-rotate. The auto activities include participants, backgrounds, and background content. One or more place management facilities enable the participant to be present in the remote part of a virtual place from any arbitrary real place at which the participant is present. The background aspect of the virtual place is presented as a selected remote place that may be different from the actual remote part of the virtual place. One or more of the place management facilities controls access by the participants to each of the virtual places. One or more of the place management facilities controls visibility of the participants in each of the virtual places. The presentation of the items of content includes real-time video and audio of more than one participant having presences in a virtual place. The presentation of the items of content includes real-time video and audio of one participant in more than one of the virtual places simultaneously. The access is controlled electronically, physically, or both, to exclude parties. The access is controlled to regulate presences of participants at events. The access is controlled using at least one of: white lists, black lists, scripts, biometric identification, hardware devices, logins to the place management facility, logins other than to one or more place management facilities, paid admission, security code, membership credential, authorization, access cards or badges, or door key pads. At least one of the actions of (a) acquiring items, (b) presenting items, and (c) managing acquisition and presentation of items is performed by resources that include hardware, software, or a combination of hardware and software, that are part of the separate locations. The hardware and software include at least one of: video equipment, audio equipment, sensors, processors, memory, storage, software, computers, handheld devices, and network. The separate locations include participants who are senders and receivers. The managing presentation of the items is performed by one or more of the network facilities not necessarily operating at any of the separate locations. The presentation of the items of content includes at least one of: changing backgrounds associated with presences of participants; presenting a common background associated with two or more of the presences of participants; changing parts of backgrounds associated with presences of participants; presenting commercial information in backgrounds associated with presences of participants; making background changes automatically based on profiles, settings, locations, and other information; and making background changes in response to manually entered instructions of the participants. The presentation of the items of content includes replacing backgrounds associated with presences of the participants with replacement backgrounds without informing participants that a replacement has been made. 
One or more place management facilities manage shared connections to permit focused connections among the participants who are present in the virtual places. The shared connections permit focused connections in at least one of the following modes: in events, one-to-one, group, meeting, education, broadcast, collaboration, presentation, entertainment, sports, game, and conference. The shared connections are provided for events such as business, education, entertainment, sports, games, social service, news, governance, nature and live interactions of participants. The media for the connections include at least one of: video, audio, text, chat, IM, email, asynchronous, and shared tools. The connections are carried on at least one of the following transport media: the Internet, a local area network, a wide area network, the public switched telephone network, a cellular network, or a wireless network. The shared connections are subjected to at least one of the following processes: recording, storing, editing, re-communicating, and re-broadcasting. One or more of the place management facilities permits access by non-participants to information about at least one of: virtual places, presences, participants, identities, status, activities, locations, resources, tools, applications, and communications. One or more of the place management facilities permits participants to remotely control electronic devices at remote locations of the virtual places in which they are present. One or more of the place management facilities permits participants to share one or more of the electronic devices. The sharing includes authorizing sharing by at least one of the following: manually, programmatically by authorizing automated sharing, automated sign ups with or without payments, or freely. The shared electronic devices are shared locally or remotely through a network and as permitted by a party who controls the device. The access is permitted to the information through an application programming interface. The application programming interface permits access by independent applications and services. The participants have virtual identities that each have at least one presence in at least one of the virtual places. Each of the participants has more than one virtual identity in each of the places. The multiple virtual identities of each of the participants can have presences in a virtual place at a given time. Each of the virtual identities is globally unique within one or more of the place management facilities. One or more of the place management facilities enables each of the participants to have a presence in remote parts of the virtual places. One or more of the place management facilities manages one or more groups of the participants. One or more of the place management facilities manages one or more groups of presences of participants. One or more of the place management facilities manages events that are limited in time and purpose and at which participants can have presences. The participants may be observers or participants at the events. One or more of the place management facilities manages the visibility of participants to one another at the events. The visibility includes at least one of: presence with everyone who is at the event publicly, presence only with participants who share one of the virtual places, presence only with participants who satisfy filters, including searches, set by a participant, and invisible presence. At least one of the participants includes a person.
At least one of the participants includes a resource. The resource includes a tool, device, or application. The resource includes a remote location that has been substituted for a background of a virtual place. The resource includes items of content including commercial information. One or more of the place management facilities maintains records related to at least one of resources, participants, identities, presences, groups, locations, virtual places, aggregations of large numbers of presences, and events. Maintaining the records includes automatically receiving information about uses or activities of the resources, participants, identities, presences, groups, locations, participants' changes during focused connections in virtual places, and virtual places. One or more of the place management facilities recognizes the presence of participants in virtual places. One or more of the place management facilities manages a visibility to other participants of the presence of participants in the virtual places. The visibility is based on settings associated with participants, groups, virtual places, rules, and non-participants. The visibility is managed in at least two different possible levels of privacy. The visibility includes information about the participants' presence and data of the participants that is governed by privacy constraints. The privacy constraints include rules and settings selected by individual participants. The privacy constraints include that if the presence is private, the data of the participant is private, and that if the presence is secret, then the existence of the presence and its data is invisible. The visibility is managed with respect to permitted types of communication to and from the participants. One or more of the place management facilities provides finding services to find at least one of participants, identities, presences, virtual places, connections, events, large events with many presences, locations, and resources. The finding services include at least one of: a map, a dashboard, a search, categories, lists, APIs, alerts, and notifications. One or more of the place management facilities controls each participant's experience of having a presence in a virtual place, by filtering. The filtering is of at least one of: identities, participants, presences, resources, groups, and connections. The resources include tools, devices, or applications. The filtering is determined by at least one value or goal associated with the virtual place or with the participant. The value or goal includes at least one of: family or social values, spiritual values, commerce, politics, business, governance, personal, social, group, mobile, invisible or behavioral goals. Each of the virtual places spans two or more geographic locations.
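In some examples, the stated privacy constraints (a private presence keeps the participant's data private; a secret presence hides even the existence of the presence) might be expressed as follows. The Python sketch is illustrative only and its field names are assumptions:

    # Illustrative sketch of the visibility rule: a private presence means
    # private data; a secret presence hides even the presence's existence.
    def visible_view(presence):
        level = presence["level"]            # 'public', 'private', or 'secret'
        if level == "secret":
            return None                      # existence itself is invisible
        if level == "private":
            return {"identity": presence["identity"]}  # presence only, no data
        return {"identity": presence["identity"], "data": presence["data"]}

    p = {"level": "private", "identity": "alice", "data": {"location": "home"}}
    print(visible_view(p))  # {'identity': 'alice'}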

In general, in an aspect, a method includes using electronic systems to acquire items of audio, video, or other media, or other data, or other content, in geographically separate acquisition places, using a publicly available set of conventions, with which any arbitrary system can comply, to enable the items of content to be carried on a publicly accessible network infrastructure, providing, on the publicly accessible network infrastructure, services that include selecting, from among the items of content, items for presentation to recipients through electronic devices at other places, the selecting being based on (a) expressed interests or goals of the recipients, to whom the items will be presented, and (b) variable boundary principles that encompass boundary preferences derived both from sources of the items of content and from the recipients to whom the items are to be presented, the variable boundary principles defining a range of regimes for passing at least some of the items to the recipients and blocking at least some of the items from the recipients, delivering the selected items of content to the recipients through the network infrastructure to the devices at the other places in compliance with the publicly available set of conventions, and presenting at least some of the selected items to the recipients at the presentation places automatically, continuously, and in real time, putting aside the latency of the network infrastructure.

Implementations may include one or more of the following features. The electronic systems include at least one of the following: cameras, video cameras, mobile phones, microphones, speakers, computers, landline telephones, VOIP phone lines, wearable computing devices, cameras built into mobile devices, PCs, laptops, stationary internet appliances, netbooks, tablets, e-pads, mobile internet appliances, online game systems, internet-enabled televisions, television set-top boxes, DVRs (digital video recorders), digital cameras, surveillance cameras, sensors, biometric sensors, personal monitors, presence detectors, web applications, websites, web services, and interactive web content. The electronic systems include software to perform functions associated with the acquisition of the items. The publicly available set of conventions also enable the items of content to be processed on the publicly accessible network infrastructure. The services provided on the publicly accessible network infrastructure are provided by software. At least one of the actions of (a) using electronic systems to acquire items in acquisition places, (b) using a publicly available set of conventions, (c) providing services, (d) delivering selected items, and (e) presenting some of the selected items, is performed by resources that include hardware, software, or a combination of hardware and software, that are part of the network infrastructure, part of the electronic devices, or part of presentation devices at the presentation places, or a combination of them. At least some of the acquisition places are also presentation places. The resources include controller resources that remotely control other, controlled resources. The controlled resources include at least one of computers, television set-top boxes, digital video recorders (DVRs), and mobile phones. The usage of at least some of the resources is shared. The shared usage may include remote usage, local usage, or networked usage. The items are acquired by people using resources. At least one of the actions is performed by at least one of the resources in the context of a revenue generating business model. The revenue is generated in connection with at least one of (a) using electronic systems to acquire items in acquisition places, (b) using a publicly available set of conventions, (c) providing services, (d) delivering selected items, (e) presenting some of the selected items, (f) or advertising in connection with any of them. The revenue is generated using hardware, software, or a combination of hardware and software, that are part of the network infrastructure, part of the electronic devices, or part of presentation devices at the presentation places, or a combination of them.

In general, in an aspect, electronic devices are used at geographically separate locations to acquire and present items of content. A place management facility manages the acquisition and presentation of the items of content in a manner to maintain virtual places. Each of the virtual places is persistent and at least partially local and at least partially remote. In each of the virtual places, two or more participants can be present at any time, continuously, and simultaneously. The place management facility enables the participant to be present in the remote part of a virtual place from any arbitrary real place at which the participant is present. The place management facility controls access by the participants to each of the virtual places. The access is controlled electronically, physically, or both, to exclude intruders.

Implementations may include one or more of the following features. The access is controlled using at least one of: white lists, black lists, scripts, biometric identification, hardware devices, logins to the place management facility, logins other than to the place management facility, access cards or badges, or door key pads. At least one of the actions of (a) acquiring items, (b) presenting items, and (c) managing acquisition and presentation of items is performed by resources that include hardware, software, or a combination of hardware and software, that are part of the separate locations. The place management facility manages shared connections to permit communications among the participants who are present in the virtual places. The shared connections permit communications in at least one of the following modes: one-to-one, group, meeting, classroom, broadcast, and conference. The communications on shared connections are optionally subjected to at least one of the following processes: recording, storing, editing, re-communicating, and re-broadcasting. The place management facility permits access by non-participants to information about at least one of: virtual places, presences, participants, identities, resources, tools, applications, and communications. The place management facility permits participants to remotely control electronic devices at remote locations of the virtual places in which they are present. The place management facility permits participants to share one or more of the electronic devices. The sharing includes authorizing sharing by at least one of the following: (1) manually, (2) programmatically by authorizing automated sharing, (3) automated sign ups with or without payments, or (4) freely. The shared electronic devices are shared locally or remotely through a network and as permitted by a party who controls the device. The access is permitted to the information through an application programming interface. The system enables the participants to have virtual identities that each have at least one presence in at least one of the virtual places. The place management facility enables each of the participants to have more than one virtual identity in each of the places. The multiple virtual identities of each of the participants can have presences in the virtual place at a given time. Each of the virtual identities is globally unique within the place management facility. The place management facility enables each of the participants to have a presence in remote parts of the virtual places. The place management facility manages one or more groups of the participants. The place management facility manages one or more groups of presences of participants. At least one of the participants includes a person. At least one of the participants includes a resource. The resource includes a tool, device, or application. The place management facility maintains records related to at least one of resources, participants, identities, presences, groups, locations, and virtual places. Maintaining the records includes automatically receiving information about uses or activities of the resources, participants, identities, presences, groups, locations, and virtual places. The place management facility recognizes the presence of participants in virtual places. The place management facility manages a visibility to other participants of the presence of participants in the virtual places. The visibility is managed in at least two different possible levels of privacy.
The visibility includes information about the participants' presence and data of the participants that is governed by privacy constraints. The privacy constraints include that (1) if the presence is private, the data of the participant is private, (2) if the presence is secret then the existence of the presence and its data is invisible. The visibility is managed with respect to permitted types of communication to and from the participants. The place management facility provides finding services to find at least one of participants, identities, presences, virtual places, connections, locations, and resources. The place management facility controls each participant's experience of having a presence in a virtual place, by filtering. The filtering is of at least one of: identities, participants, presences, resources, groups, and communications. The resources include tools, devices, or applications. The filtering is determined by at least one value or goal associated with the virtual place or with the participant. The value or goal includes at least one of: family or social values, spiritual values, or behavioral goals. Each of the virtual places spans multiple geographic locations.

In general, in an aspect, an active knowledge management facility is operated with respect to participants who have at least one expressed goal related to at least one common activity. The active knowledge management facility accumulates information about performance of the common activity by the participants and information about success of the participants in achieving the goal, from electronic devices at geographically separate locations. The information is accumulated through a network in accordance with a set of predefined conventions for how to express the performance and success information. The active knowledge management facility adjusts guidance information that guides participants on how to reach the goal, based on the accumulated information.

Implementations may include one or more of the following features. The active knowledge management facility disseminates the adjusted participant guidance information. The electronic systems include digital cameras. The activities include actions of the users on the electronic systems, and the information about success is generated by the electronic systems as a result of the actions. The adjusted participant guidance information is disseminated by the same electronic devices from which the performance information is accumulated. The adjusted participant guidance information is disseminated by devices other than the electronic devices from which the performance information is accumulated. The active knowledge management facility includes distributed processing of the information at the electronic devices. The active knowledge management facility includes central processing of the information on behalf of the electronic devices. The active knowledge management facility includes hybrid processing of the information at the electronic devices and centrally. The participants include providers of goods or services to help other participants reach the goal. At least one of the expressed goals is shared by more than one of the participants. At least part of the information is accumulated automatically. At least part of the information is accumulated manually. The information about success of the participants in achieving the goal includes a quality of performance or a level of satisfaction. The adjusted participant guidance information includes the best guidance information for reaching the goal. At least some of the adjusted participant guidance information is disseminated in exchange for consideration. The activity information is made available to providers of guidance information. The activity information is made available to the participants. The success information is made available to providers of guidance information. The success information is made available to the participants. The activity information is made available to providers of goal reaching devices or services. The success information is made available to providers of goal reaching devices or services. The guidance information guides participants in the use of electronic devices. The activity information and the success information are accumulated at virtual places in which the participants have presences. The guidance information is used to alter a reality of the participants.
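In some examples, the accumulate-and-adjust loop of the active knowledge management facility may be sketched in a few lines of Python. ActiveKnowledge and its methods are hypothetical names, and the success metric shown (a simple success rate per method) is an assumption standing in for whatever adjustment the facility performs:

    # Illustrative sketch: accumulate performance/success reports for a
    # common activity, then adjust which guidance is offered.
    from collections import defaultdict

    class ActiveKnowledge:
        def __init__(self):
            self.reports = defaultdict(list)  # (activity, method) -> outcomes

        def report(self, activity, method, succeeded):
            self.reports[(activity, method)].append(bool(succeeded))

        def best_guidance(self, activity):
            # Adjusted guidance: the method with the highest success rate.
            rates = {m: sum(v) / len(v)
                     for (a, m), v in self.reports.items() if a == activity}
            return max(rates, key=rates.get) if rates else None

    ak = ActiveKnowledge()
    ak.report("night_photo", "use_tripod", True)
    ak.report("night_photo", "handheld", False)
    print(ak.best_guidance("night_photo"))  # use_tripod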

In general, in an aspect, by means of an electronically accessible persistent utility on a network, at all times and at geographically separate locations, information is accepted from and delivered to any arbitrary electronic devices or arbitrary processes. The information, which is communicated on the network, is expressed in accordance with conventions that are predefined to facilitate altering a reality that is perceived by participants who are using the electronic devices or the processes at the locations.

Implementations may include one or more of the following features. The altering of the reality is associated with becoming more successful in activities for which the participants share a goal. The altering of the reality includes providing virtual places that are in part local and in part remote to each of the separate locations and in which the participants can be present. The altering of the reality includes providing multiple altered realities for each of the participants. The arbitrary electronic devices or arbitrary processes include at least one of: televisions, telephones, computers, portable devices, players, and displays. The electronic devices and processes expose user-interface and real-world capture and presentation functions to the participants. The electronic devices and processes incorporate proprietary technology or are distributed using proprietary business arrangements, or both. At least some of the electronic devices and processes provide local functions for the participants. The local functions include local capture and presentation functions. At least some of the electronic devices and processes provide remote capture functions for participants. At least some of the electronic devices and processes include gateways between other devices and processes and the network. The utility provides services with respect to the information. The services include analyzing the information. The services include storing the information. The services include enabling access by third parties to at least some of the information. The services include recognition of an identity of a participant associated with the information. The network includes the Internet. The conventions include message syntaxes for expressing elements of the information.
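In some examples, the predefined conventions may include message syntaxes for expressing elements of the information, as noted above. The following JSON-based sketch is purely an assumption about what such a syntax might look like; the "artpm/1.0" tag and all field names are hypothetical:

    # Illustrative sketch: a message expressed under a hypothetical
    # predefined convention, so that any arbitrary device can comply.
    import json

    message = {
        "convention": "artpm/1.0",      # hypothetical convention tag
        "kind": "activity_report",
        "identity": "alice:camera1",
        "activity": "night_photo",
        "success": True,
    }
    wire = json.dumps(message)          # carried on the network
    print(json.loads(wire)["kind"])     # any compliant utility can parse it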

In general, in an aspect, with respect to aspects of a person's reality that include interactions between the person and electronic devices that are served by a network, the person is enabled to define characteristics of an altered reality for the person or for one or more identities associated with the person. The interactions between the person or a given one of the identities of the person and each of the electronic devices are automatically regulated in accordance with the defined characteristics of the altered reality.

Implementations may include one or more of the following features. The person is enabled to define characteristics of multiple different altered realities for the person or for one or more identities associated with the person. The person is enabled to switch between altered realities. The characteristics defined for an altered reality by the person are applied to automatically regulate interactions between a second person and electronic devices. Automatically regulating the interactions includes filtering the interactions. The filtering includes filtering in, filtering out, or both. Automatically regulating the interactions includes arranging for payments to the person based on aspects of the interactions with the person or one or more of the identities. A facility enables the person to define variable boundary principles of the altered reality. The interactions include presentation of items of content to the person or to one or more identities of the person. The items of content include tools and resources. The interactions include the electronic devices receiving information from the person with respect to the person or a given one or more of the identities. The electronic devices include devices that are located remotely from the person. A performance of the altered reality is evaluated based on a defined metric. The characteristics of the altered reality are changed to improve the performance of the altered reality under the defined metric. The characteristics are changed automatically. The characteristics are changed manually. The characteristics are changed by the person with respect to the person or one or more of the identities of the person. The characteristics are changed by vendors. The characteristics are changed by governances. Automatically regulating the interactions includes providing security for the person or one or more of the identities with respect to the interactions. Regulating the interactions between the person or one or more of the identities and each of the electronic devices includes reducing or excluding the interactions. Automatically regulating interactions includes increasing the amount of the interactions between the person or one or more of the identities and the electronic devices as a proportion of all of the interactions that the person or the identity has in experiencing reality. The characteristics defined for the person or the identity include goals or interests of the person or the one or more identities. The altered reality includes a shared virtual place in which the person or the one or more of the identities has a presence. The person has multiple identities for each of which the person is enabled to define characteristics of multiple different altered realities. The person is enabled to switch between the multiple different altered realities. The electronic devices include at least one of a display device, a portable communication device, and a computer. The electronic devices include connected TVs, pads, cell phones, tablets, software, applications, TV set-top boxes, digital video recorders, telephones, mobile phones, cameras, video cameras, microphones, portable devices, players, displays, stand-alone electronic devices or electronic devices that are served by a network. The electronic devices are local to the person or one or more of the identities. The electronic devices are mobile. The electronic devices are remote from the person or one or more of the identities. The electronic devices are virtual.
The defined characteristics of the altered reality are saved and shared with other people. The results of one or more altered realities are reported for use by another person or one or more identities that utilize the altered realities. The results of one or more altered realities are reported and shared with other people. The characteristics of reported altered realities are retrieved by other people. The person alters the defined characteristics of the altered reality for the person or one or more of the identities over time. The characteristics are defined by the person to include specified kinds of interactions by the person or one or more of the identities with the electronic devices. The characteristics are defined by the person to exclude specified kinds of interactions by the person or one or more of the identities with the electronic devices. The characteristics are defined by the person to associate payment to the person for including specified kinds of interactions by the person or one or more of the identities in the altered reality.
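
By way of illustration only, the automatic regulation described above, filtering interactions in or out and arranging payments through a paywall, can be sketched as a small rules engine applied to each incoming interaction. The following Python sketch is not part of the specification; the names Interaction, Boundary, AlteredReality, and the paywall_rate field are assumptions invented for the example.

from dataclasses import dataclass, field

@dataclass
class Interaction:
    source: str          # who or what initiates the interaction
    topic: str           # subject matter of the interaction
    content: str

@dataclass
class Boundary:
    """Variable boundary principles for one altered reality of one identity."""
    priorities: set = field(default_factory=set)   # topics to filter in
    exclusions: set = field(default_factory=set)   # topics to filter out
    paywall_rate: float = 0.0                      # assumed payment per admitted interaction

class AlteredReality:
    def __init__(self, identity: str, boundary: Boundary):
        self.identity = identity
        self.boundary = boundary
        self.earnings = 0.0

    def regulate(self, interaction: Interaction):
        """Automatically admit, reject, or monetize an incoming interaction."""
        if interaction.topic in self.boundary.exclusions:
            return None                            # filtered out
        if interaction.topic in self.boundary.priorities:
            return interaction                     # filtered in, no charge
        # unknown topics pass the paywall: the sender pays for attention
        self.earnings += self.boundary.paywall_rate
        return interaction

# Example: a sports-free reality that charges for unsolicited attention.
reality = AlteredReality("identity-1", Boundary(
    priorities={"family"}, exclusions={"sports"}, paywall_rate=0.05))
print(reality.regulate(Interaction("ad-server", "sports", "...")))     # None
print(reality.regulate(Interaction("vendor", "travel", "...")).topic)  # paywalled in
print(reality.earnings)                                                # 0.05

In this sketch an excluded topic is dropped entirely, a prioritized topic is admitted free of charge, and anything else is admitted only by crediting the identity's paywall earnings, mirroring the filter-in, filter-out, and payment features listed above.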

In general, in an aspect, through an electronically accessible persistent utility on a network, at all times and in geographically separate locations, accepting from and delivering to mobile electronic devices and processes and remote electronic devices and processes, and communicating on the network, information expressed in accordance with conventions that are predefined to facilitate altering a reality that is perceived by participants who are using the mobile electronic devices and processes and the remote electronic devices and processes at the locations.

Implementations may include one or more of the following features. The mobile electronic devices and processes comprise at least one of mobile phones, mobile tablets, mobile pads, wearable devices, portable projectors, or a combination of them. The remote electronic devices and processes comprise non-mobile devices and processes. The mobile electronic devices and processes or the remote electronic devices and processes comprise ground-based devices and processes. The mobile electronic devices and processes or the remote electronic devices and processes comprise air-borne devices and processes. The conventions that are predefined to facilitate altering a reality that is perceived by participants comprise features that enable participants to perceive, using the devices and processes, a continuously available alternate reality associated simultaneously with more than one of the geographically separate locations.

In general, in an aspect, an apparatus comprises an electronic device arranged to communicate, through a communication network, audio and video presence content in a way (a) to maintain a continuous real-time shared presence of a local user with one or more remote users at remote locations and (b) to provide to and receive from the communication network alternate reality content that represents one or more features of a sharable alternative reality for the local user and the remote users.

Implementations may include one or more of the following features. The electronic device comprises a mobile device. The electronic device comprises a device that is remote from the local user. The electronic device is controlled remotely. The presence content comprises content that is broadcast in real time. The electronic device is arranged to provide multiple functions that effect aspects of the alternative reality. The electronic device is arranged to provide multiple sources of content that effect aspects of the alternative reality. The electronic device is arranged to acquire multiple sources of remote content that effect aspects of the alternative reality. The electronic device is arranged to use other devices to share its processing load. The electronic device is arranged to respond to control of multiple types of user input. The user input may be from a different location than a location of the device.

In general, in an aspect, a user at a single electronic device can simultaneously control features and functions of a possibly changing set of other electronic devices that acquire and present content and expose features and functions that are associated with an alternative reality that is experienced by the user.

Implementations may include one or more of the following features. The single electronic device can dynamically discover the features and functions of the possibly changing set of other electronic devices. A selectable set of features and functions of the possibly changing set of other electronic devices can be displayed for the user. A replica of a control interface of at least one of the possibly changing set of other electronic devices can be displayed for the user. A replica of a subset of the control interface of at least one of the possibly changing set of other electronic devices can be displayed for the user. In conjunction with a control interface associated with at least one of the possibly changing set of other electronic devices, advertising can be displayed for the user that has been chosen based on the user's control activities or based on advertising associated with a device that the user is controlling or a combination of them. In conjunction with a control interface associated with at least one of the possibly changing set of other electronic devices, content can be displayed for the user that the user chooses based on the user's control activities.
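
One way to picture the dynamic discovery and interface-replica features above is as a run-time registry that the single controlling device queries. The Python sketch below is a minimal, assumption-laden stand-in: DeviceDirectory, ControlledDevice, and the dictionary-based discovery take the place of an actual network discovery protocol.

class ControlledDevice:
    def __init__(self, name, functions):
        self.name = name
        self.functions = functions          # function name -> callable

    def expose(self):
        """Advertise the features this device currently offers."""
        return sorted(self.functions)

class DeviceDirectory:
    """Tracks a possibly changing set of other electronic devices."""
    def __init__(self):
        self._devices = {}

    def register(self, device):
        self._devices[device.name] = device

    def unregister(self, name):
        self._devices.pop(name, None)

    def discover(self):
        """Replica material: each device's currently exposed controls."""
        return {name: dev.expose() for name, dev in self._devices.items()}

    def invoke(self, name, function, *args):
        return self._devices[name].functions[function](*args)

directory = DeviceDirectory()
directory.register(ControlledDevice("camera-2", {"pan": lambda deg: f"panned {deg}"}))
print(directory.discover())                 # basis for a displayed control replica
print(directory.invoke("camera-2", "pan", 30))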

In general, in an aspect, a single electronic device is configured to simultaneously control features and functions of a possibly changing set of other electronic devices that acquire and present content and expose features and functions that are associated with an alternative reality that is experienced by a user. The single electronic device includes user interface components that expose the features and functions of the possibly changing set of other electronic devices to the user and receive control information from the user.

In general, in an aspect, separate coherent alternative digital realities can be created and delivered to users, by obtaining content portions using electronic devices local to the user and at locations accessible on a communication network. Each of the content portions is usable as part of more than one of the coherent alternative digital realities. Content portions are selected to be part of each of the coherent alternative digital realities based on a nature of the coherent alternative reality. The selected content portions are associated as parts of the coherent alternative digital reality. Each of the coherent digital realities is made selectively accessible to users on the communication network to enable them to experience each of the coherent digital realities.

Implementations may include one or more of the following features. The associating comprises at least one of combining, adding, deleting, and transforming. Each of the digital realities is made accessible in real time. The content portions are made accessible to users for reuse in creating and delivering coherent digital realities. At least some of the selected content portions that are part of each of the coherent digital realities are accessible in real time to the users.
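
As a hedged illustration of the reuse described above, the selection of content portions by the nature of each coherent digital reality can be reduced to tag matching. The portion records, the tags, and the intersection rule below are all invented for the example; a real system could select portions on any basis.

# Content portions reused across multiple coherent digital realities,
# selected by each reality's "nature" (modeled here as a set of tags).
portions = [
    {"id": "beach-cam", "tags": {"ocean", "live"}},
    {"id": "whale-audio", "tags": {"ocean", "recorded"}},
    {"id": "city-cam", "tags": {"urban", "live"}},
]

def assemble(nature: set):
    """Select every portion whose tags intersect the reality's nature."""
    return [p["id"] for p in portions if p["tags"] & nature]

ocean_reality = assemble({"ocean"})   # reuses beach-cam and whale-audio
live_reality = assemble({"live"})     # reuses beach-cam and city-cam
print(ocean_reality, live_reality)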

In general, in an aspect, a user of an electronic device can selectively access any one or more of a set of separate coherent digital realities that have been assembled from content portions obtained locally to the user and/or at remote locations accessible on a communication network. At least some of the content portions are reused in more than one of the separate coherent digital realities. At least some content portions for at least some of the coherent digital realities are presented to the user in real-time.

In general, in an aspect, in response to information about selections by users, making available to the users for presentation on electronic devices local to the users, one or more of a set of separate coherent alternative digital realities that have been assembled from content portions obtained locally to the users and/or at remote locations accessible on a communication network. At least some of the content portions are reused in more than one of the separate coherent alternative digital realities. At least some of the content portions for at least some of the coherent digital realities are presented to the users in real time.

Implementations may include one or more of the following features. At least some of the content portions and the separate coherent digital realities are distributed through the communication network so that they can be made available to the users. Different ones of the coherent digital realities share common content portions and have different content portions based on information about the users to whom the different ones of the coherent digital realities will be made available.

Implementations may include one or more of the following features. A user who has a digital presence in one of the alternative digital realities is enabled to select an attribute of other people who will have a presence with the user in the alternative digital reality, and only people having the attribute, and not others, will have a presence in the presentation of that alternative digital reality to the user. A user who has a digital presence in one of the alternative digital realities can select an attribute of other people who will have a presence with the user in the alternative digital reality, retrieve information related to the attribute, and display the information associated with each of the other people.
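
A minimal sketch of the attribute-based presence filtering described above follows. The people records, the attribute sets, and the notion of a permitted profile field are assumptions for illustration only.

# Only people sharing a selected attribute appear in the user's
# presentation of the reality, along with whatever permitted data
# accompanies them.
people = [
    {"name": "Avery", "attributes": {"photographer"}, "profile": "portfolio..."},
    {"name": "Blake", "attributes": {"angler"}, "profile": "catch log..."},
    {"name": "Casey", "attributes": {"photographer"}, "profile": "gallery..."},
]

def present(attribute: str):
    """Return only people with the attribute, plus their permitted info."""
    return [(p["name"], p["profile"])
            for p in people if attribute in p["attributes"]]

print(present("photographer"))   # Avery and Casey; Blake is not presented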

In general, in an aspect, a market is maintained for a set of coherent digital realities that are assembled from content portions that are acquired by electronic devices at geographically separate locations, including some locations other than the locations of users or creators of the coherent digital realities. The content portions include real-time content portions and recorded content portions. The market is arranged to receive coherent digital realities assembled by creators and to deliver coherent digital realities selected by users. The market includes mechanisms for compensating creators and charging users.

Implementations may include one or more of the following features. A user who selects a coherent digital reality can share the user's presence in that selected coherent digital reality with other users who also select that coherent reality and have agreed to share their presence in the selected coherent reality, while excluding any who choose that coherent reality but have not agreed to share their presence.

Implementations may include one or more of the following features. Information about popularities of the coherent digital realities is collected and made available to users. Information about users who share a coherent digital reality is collected and used to enable users to select and have a presence in the coherent digital reality based on the information. A user is charged for having a presence in a coherent digital reality. Selection of and presence in a coherent digital reality are regulated by at least one of the following regulating techniques: membership, subscription, employment, promotion, bonus, or award. The market can provide coherent digital realities from at least one of an individual, a corporation, a non-profit organization, a government, a public landmark, a park, a museum, a retail store, an entertainment event, a nightclub, a bar, a natural place or a famous destination.
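
The market mechanics above, receiving realities from creators, delivering them to users, compensating one side and charging the other, can be sketched as a simple ledger. The flat access price and the 70 percent creator share below are invented example parameters, not terms of the specification.

class RealityMarket:
    CREATOR_SHARE = 0.70                     # assumed split, not from the spec

    def __init__(self):
        self.catalog = {}                    # reality name -> (creator, price)
        self.balances = {}                   # account -> amount

    def publish(self, reality, creator, price):
        """Receive a coherent digital reality assembled by a creator."""
        self.catalog[reality] = (creator, price)

    def purchase(self, reality, user):
        """Charge the user and compensate the creator."""
        creator, price = self.catalog[reality]
        self.balances[user] = self.balances.get(user, 0.0) - price
        self.balances[creator] = (self.balances.get(creator, 0.0)
                                  + price * self.CREATOR_SHARE)

market = RealityMarket()
market.publish("reef-dive", creator="maria", price=2.00)
market.purchase("reef-dive", user="li")
print(market.balances)   # {'li': -2.0, 'maria': 1.4}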

In general, in an aspect, through a local electronic device, a potentially varying remote reality is presented to a user at a local place. The remote reality includes sounds or views or both that have been derived at a remote place. The remote reality is representative of varying actual experiences that a person at the remote place would have as the remote context in which that person is having the actual experiences changes. Changes in a local context in which the user at the local place is experiencing the remote reality are sensed. The presentation of the remote reality to the user at the local place is varied based on the sensed changes in the local context in which the user at the local place is experiencing the remote reality. The presentation of the remote reality to the user at the local place is varied based also on the actual experience of the person at the remote place for a remote context that corresponds to the local context.

Implementations may include one or more of the following features. The local context comprises an orientation of the user relative to the local electronic device. The presentation of the remote reality is also varied based on information provided by the user at the local place. The local context comprises a direction of the face of the user. The local context comprises motion of the user. The presentation is varied continuously. The sensed changes are based on face recognition. The presentation is varied with respect to a field of view. The sensed changes comprise audio changes. The presentation is varied with respect to at least one of the luminance, hue, or contrast.
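
To make the context-driven variation concrete, the sketch below maps a viewer's lateral offset and distance from the display to a pan angle and field of view, as if the display were a window onto the remote place. The linear mapping and the clamping constants are assumptions chosen only for demonstration.

def remote_view_window(head_offset_cm: float, distance_cm: float,
                       base_fov_deg: float = 60.0):
    """Map the viewer's lateral offset and distance to a pan angle and
    field of view, as if the display were a window onto the remote place."""
    # lateral movement pans the remote view, clamped to a plausible range
    pan_deg = max(-45.0, min(45.0, head_offset_cm * 0.5))
    # moving closer widens the field of view, as at a physical window
    fov_deg = base_fov_deg * (100.0 / max(distance_cm, 30.0))
    return pan_deg, fov_deg

print(remote_view_window(head_offset_cm=-20, distance_cm=80))
print(remote_view_window(head_offset_cm=40, distance_cm=150))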

In general, in an aspect, an awareness of a potentially changing direction in which a person in the locale of an electronic device is facing is automatically maintained, and a direction of real-time image or video content presented by the electronic device to the person is automatically and continuously changed to correspond to the changing direction of the person in the locale.

In general, in an aspect, through one or more audio visual electronic devices, at a local place associated with a user, an alternative reality is presented to the user. The alternative reality is different from an actual reality of the user at the local place. A state of susceptibility of the user to presentation of the alternative reality at the local place is automatically sensed, and the state of presentation of the alternative reality for the user is automatically controlled, based on the sensed state of susceptibility.

Implementations may include one or more of the following features. The state of susceptibility comprises a presence of the user in the locale of at least one of the audio visual devices. The state of susceptibility comprises an orientation of the user with respect to at least one of the audio visual devices. The state of susceptibility comprises information provided by the user through a user interface of at least one of the audiovisual devices. The state of susceptibility comprises an identification of the user. The state of susceptibility corresponds to a selected one of a set of different identities of the user.

In general, in an aspect, as a person approaches an electronic device on which a digital reality associated with the person can be presented to the person, the person is automatically identified. The digital reality includes live video from another location and other content portions to be presented simultaneously to the person. The electronic device is powered up in response to identifying the person. The presentation of the digital reality to the person is begun automatically. A determination of when the identified person is no longer in the vicinity of the electronic device is automatically made. The device is automatically powered down in response to the determination.
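
The power-state cycle described above can be sketched as a small state machine: identify an approaching person, power up and begin the presentation, and power down on departure. In the Python sketch below the recognizer is a stub passed in as a function; any real system would rely on actual sensing and identification.

class PresenceControlledDevice:
    def __init__(self, recognize):
        self.recognize = recognize           # sensor frame -> person id or None
        self.active_person = None

    def on_sensor_frame(self, frame):
        """React to each sensor frame: power up on arrival, down on departure."""
        person = self.recognize(frame)
        if person and person != self.active_person:
            self.active_person = person
            print(f"power up; presenting {person}'s digital reality")
        elif person is None and self.active_person:
            print(f"{self.active_person} departed; powering down")
            self.active_person = None

device = PresenceControlledDevice(recognize=lambda f: f.get("face"))
device.on_sensor_frame({"face": "identity-7"})   # powers up, presents
device.on_sensor_frame({"face": "identity-7"})   # no change
device.on_sensor_frame({})                       # powers down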

In general, in an aspect, a content broadcast facility is provided through a communication network. The broadcast facility enables users to find and access, at any location at which the network is accessible, broadcasts of real-time content that represent at least portions of alternative realities that are alternative to actual realities of the users. The content has been obtained at separate locations accessible through the network, from electronic devices at the separate locations.

Implementations may include one or more of the following features. A directory service enables at least one of the users to identify real-time content that represents at least portions of selected alternative realities of the users. Metadata of the real-time content is generated automatically. Users can find and access broadcasts of non-real-time content. Broadcasts of real-time content are provided automatically that represent at least portions of alternative realities that are alternative to actual realities of the users, according to a predefined schedule.

In general, in an aspect, live video discussions are enabled between two persons at separate locations through a communication system. The participation of at least one of the persons in the live video discussion includes features of an alternative reality that is alternative to an actual reality of the person. Language differences between the two people are automatically determined based on their live speech during the video discussion. The speech of one or the other or both of the two people is automatically translated in real time during the video discussion.

Implementations may include one or more of the following features. The language differences are determined based on pre-stored information. The language differences are determined based on locations of the persons with respect to the alternative reality. More than two persons are participating in the live video discussion, language differences among the persons are determined automatically, and the speech of the persons is translated in real-time automatically as different people speak. Non-speech material is translated as part of the alternative reality. Live speech is recorded during the video discussion as text in a language other than the language spoken by the speaker.
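
As a rough illustration of the translation flow, the sketch below detects each speaker's language once from live speech and then translates between the detected pair for each utterance. The detect_language and translate functions are placeholders standing in for real speech and translation services.

def detect_language(utterance: str) -> str:
    # stand-in: a real system would classify the speech, not match words
    return "es" if "hola" in utterance.lower() else "en"

def translate(text: str, source: str, target: str) -> str:
    return f"[{source}->{target}] {text}"      # placeholder translation

def relay(utterance: str, languages: dict, speaker: str, listener: str):
    """Detect the speaker's language once, then translate for the listener."""
    languages.setdefault(speaker, detect_language(utterance))
    src, dst = languages[speaker], languages.get(listener, "en")
    return utterance if src == dst else translate(utterance, src, dst)

langs = {}
print(relay("Hola, ¿me ves bien?", langs, speaker="ana", listener="joe"))
print(relay("Yes, loud and clear.", langs, speaker="joe", listener="ana"))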

In general, in an aspect, at an electronic device that is in a local place, speech of a user is recognized, and the recognized speech is used to enable the user to participate, through a communication network that is accessible at the local place and at remote places, in one or more of the following: (a) an alternate reality of the user, (b) any of multiple identities of the user, or (c) presence of the user in a virtual place.

Implementations may include one or more of the following features. The recognized speech is used to automatically control features of the presentation of the alternate reality to the user. The recognized speech is used to determine which of the multiple identities of the user is active, and the user can automatically participate in a manner that is consistent with the determined identity. The recognized speech is used to determine that the user is present in the virtual place, and the virtual place as perceived by other users is caused to include the presence of the user.

In general, in an aspect, through an electronic device that is at a local place and has a user interface, a user is enabled to simultaneously control services available on one or more other devices at least some of which are at remote places that are electronically accessible from the local electronic device, in order to (a) participate in an alternative reality, (b) exercise an alternative presence, or (c) exercise an alternative identity.

Implementations may include one or more of the following features. The local electronic device and at least some of the multiple other devices are respectively configured to use incompatible protocols for their operation or communication or both. At least some of the services available on the multiple other devices provide or use audio visual content. At least some of the multiple other devices are not owned by the user. At least some of the multiple other devices comprise different proprietary operating systems. Translation services are provided with respect to the incompatible protocols. At least some of the multiple other devices include control applications that respond to the control of the user at the local place. At least some of the multiple other devices include viewer applications that provide a view to the user at the local place of the status of at least one of the other devices. The user has multiple alternate identities and the user is enabled to control the services available on the multiple other devices in modes that relate respectively to the multiple alternate identities. The services comprise services available from one or more applications. The services comprise acquisition or presentation of digital content. The services are paid for by the user. The services are not paid for by the user. The user can locate the services using the electronic device at the local place. Audio visual content is provided to or used from the other devices. At least some of the other devices are not owned by a user of the electronic device at the local place. At least some of the other devices include control applications that respond to the electronic device at the local place. At least some of the other devices include viewer applications that provide views to a user at the local place of the status of at least one of the other devices. The services are available from one or more applications running on the other devices. The services available from the other devices comprise acquisition or presentation of digital content. The services available from the other devices are paid for by a user. The services available from the other devices are not paid for by a user. A user can locate services available from the other devices using the electronic device at the local place.

In general, in an aspect, multiple users at different places, each working through a user interface of an electronic device at a local place, can locate and simultaneously control different services available on multiple other devices at least some of which are at remote places that are electronically accessible from the local electronic device.

Implementations may include one or more of the following features. At least some of the local electronic devices and the multiple other devices are respectively configured to operate using incompatible protocols for their operation or communication or both. The registration of at least some of the other devices is enabled on a server that tracks the devices, the services available on them, their locations, and the protocols used for their operation or communication or both. The services comprise one or more of the acquisition or delivery of digital content, features of applications, or physical devices.
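
The registration server described above can be pictured as a registry that records each device's location, services, and protocol, and translates a common command vocabulary into each native form. The two protocols and the translation table in this Python sketch are invented for illustration.

class DeviceServer:
    def __init__(self):
        self.registry = {}     # device -> {"location": ..., "protocol": ..., "services": ...}
        self.translators = {
            "proto-A": lambda cmd: f"A:{cmd.upper()}",   # invented text protocol
            "proto-B": lambda cmd: {"op": cmd},          # invented message protocol
        }

    def register(self, device, location, protocol, services):
        """Track the device, its services, its location, and its protocol."""
        self.registry[device] = {"location": location,
                                 "protocol": protocol,
                                 "services": services}

    def send(self, device, command):
        """Translate a common command into the device's native protocol."""
        entry = self.registry[device]
        return self.translators[entry["protocol"]](command)

server = DeviceServer()
server.register("dvr-1", "den", "proto-A", ["record", "play"])
server.register("cam-9", "harbor", "proto-B", ["pan"])
print(server.send("dvr-1", "record"))   # 'A:RECORD'
print(server.send("cam-9", "pan"))      # {'op': 'pan'}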

In general, in an aspect, from a first place, remotely controlling simultaneously, through a communication network, different types of subsidiary electronic devices located at separate other places where the communication network can be accessed. The simultaneous remote controlling comprises providing commands to and receiving information from each of the different types of subsidiary devices in accordance with protocols associated with the respective types of devices, and providing conversion of the commands and information as needed to enable the simultaneous remote control.

Implementations may include one or more of the following features. The simultaneous remote controlling is with respect to two identities of the user. Audio visual content is provided to or used from the subsidiary electronic devices. At least some of the subsidiary devices are not owned by a user who is remotely controlling. At least some of the subsidiary devices include control applications that respond to the controlling. At least some of the subsidiary devices include viewer applications that provide views to a user at the first place of the status of at least one of the subsidiary devices. The services are available from one or more applications running on the subsidiary devices. The services available from the subsidiary devices comprise acquisition or presentation of digital content. The services available from the subsidiary devices are paid for by a user. The services available from the subsidiary devices are not paid for by a user. A user can locate services available from the subsidiary devices using an electronic device at the first place.

In general, in an aspect, at a local place, portal services support an alternate reality for a user at a remote place, the portal services being arranged (a) to receive communications from the user at the remote place through a communications network, and, (b) in response to the received communications, to interact with a subsidiary electronic device at the local place to acquire or deliver content at the local place for the benefit of the user and in support of the alternate reality at the remote place. The subsidiary electronic device is one that can be used for a local function at the local place unrelated to interacting with the portal services. The owner of the subsidiary electronic device is not necessarily the user at the remote place.

In general, in an aspect, on an electronic device that provides standalone functions to a user, a process configures the electronic device to provide other functions as a virtual portal with respect to content that is associated with an alternate reality of the user or of one or more other parties. The process enables the electronic device to capture or present content of the alternate reality and to provide or receive the content to and from a networked device in accordance with a convention used by the networked device to communicate.

Implementations may include one or more of the following features. The electronic device comprises a mobile phone. The electronic device comprises a social network service. The electronic device comprises a personal computer. The electronic device comprises an electronic tablet. The electronic device comprises a networked video game console. The electronic device comprises a networked television. The electronic device comprises a networking device for a television, including a set top cable box, a networked digital video recorder, or a networking device for a television to use the Internet. The networked device can be selected by the user. A user interface associated with the networked device is presented to the user on the electronic device. The user can control the networked device by commands that are translated. The networked device also provides content to or receives content from another separate electronic device of another user at another location with respect to an alternate reality of the other user. The content presented on the electronic device is supplemented or altered based on information about the user, the electronic device, or the alternate reality.

In general, in an aspect, a user, who is one of a group of participants in an electronically managed online governance that is part of an alternative reality of the user, can compensate the governance electronically for value generated by the governance.

Implementations may include one or more of the following features. The governance comprises a commercial venture. The governance comprises a non-profit venture. The compensation comprises money. The compensation comprises virtual money, credit, or scrip. The compensation is based on a volume of activity associated with the governance. The compensation is determined as a percentage of the volume of activity. The participant may alter the compensation. The activity comprises a dollar volume of commercial transactions. Online accounts of the compensation are maintained.
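
A minimal sketch of the volume-based compensation above, assuming a percentage rate that the participant may alter, follows; the 2 percent default is an invented example value.

class GovernanceAccount:
    def __init__(self, rate=0.02):
        self.rate = rate            # member-adjustable percentage of activity
        self.owed = 0.0             # online account of compensation

    def record_transaction(self, amount: float):
        """Accrue compensation as a percentage of transaction volume."""
        self.owed += amount * self.rate

account = GovernanceAccount()
account.record_transaction(150.00)
account.rate = 0.01                 # the participant alters the compensation
account.record_transaction(200.00)
print(round(account.owed, 2))       # 5.0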

In general, in an aspect, a user of an electronic device, who is located in a territory that is under repressive control of a territorial authority and whose real-world existence is repressed by the authority, can use the electronic device to be present as a non-repressed identity in an alternative reality that extends beyond the territory. The presence of the user as the non-repressed identity in the alternative reality is managed to reduce impact on the real-world existence of the user. Managing the presence of the user as the non-repressed identity comprises enabling the user to be present in the alternative reality using a stealth identity. Through the stealth identity, the user may own property and engage in electronic transactions that are associated with the stealth identity, and are associated with the user only beyond the territory that is under repressive control. Managing the presence of the user comprises providing a secure connection of the user to the alternative reality. Managing the presence of the user comprises enabling the user to be camouflaged or disguised with respect to the alternative reality. Managing the presence of the user comprises protecting the user's presence with respect to monitoring by the territorial authority. Managing the presence of the user comprises enabling the user to engage in electronic transactions through the alternative reality with parties who are not located within the territory.

In general, in an aspect, a user is entertained by presenting aspects of an entertainment alternative reality to the user through one or more electronic devices. The entertainment alternative reality is presented in a mode in which the user need not be a participant in or have a presence in the alternative reality or in a place where the alternate reality is hosted. The user can observe or interact with the aspects of the alternative reality as part of entertaining the user.

Implementations may include one or more of the following features. The entertaining of the user comprises presenting the aspects of the alternative reality through a commonly used entertainment medium. The entertaining of the user by presenting aspects of an entertainment alternative reality continues uninterrupted and is always available to the user. The entertainment alternative reality progresses in real-time. The entertainment alternative reality comprises an event. The aspects of the entertainment alternative reality are presented to the user through a broadcast medium. The entertaining replaces a reality that the user is not able to experience in real life. The entertainment alternative reality comprises a fictional event. The entertainment alternative reality is associated with a novel. The entertaining comprises presenting a movie. The presenting of aspects of an entertainment alternative reality comprises serializing the presenting. Two or more different users are presented aspects of an entertainment alternative reality that are custom-formed for each of the users.

Implementations may include one or more of the following features. Behavior of the user or of a population of users is changed by altering the entertaining over time. The user registers as a condition to the entertaining. The entertaining is associated with a time line or a roadmap or both. The time line or the roadmap or both are changed dynamically in connection with the entertaining. The time line is non-linear. The entertaining uses groups of users associated with opposing sides of the entertainment alternative reality. The presenting of aspects of the entertainment alternative reality includes engaging people in real world activities as part of the entertainment alternative reality. The user plays a role with respect to the entertaining. The user adopts an entertainment identity with respect to the entertaining. The user employs her real identity with respect to the entertaining. The entertaining of the user is part of a real-world exercise for a group of users. The entertaining comprises part of a money-making venture. A group of the users comprises a money-making venture with respect to the entertaining. A group of the users incorporates as a money-making venture within the entertaining. The money-making venture with respect to the entertaining is conducted using at least one of virtual money, real money, scrip, credit, or another financial instrument. The money-making entertainment venture is associated with at least one of creating, designing, building, manufacturing, selling, or supporting commercial items or services. The entertaining is associated with a financial accounting system for the delivery and acquisition of products and services. The entertaining is associated with a financial accounting system for buying, selling, valuing, or owning at least one of virtual or real goods or services. The entertaining is associated with a financial accounting system for assets of entertainment identities and real identities with respect to the entertainment. The entertaining is associated with a financial accounting system for accounts of entertainment identities and real identities that are represented by at least one of virtual money, real money, scrip, credit or another financial instrument. A system records, analyzes, or reports on the relationship of aspects of the entertaining to outcomes of the entertaining.

In general, in an aspect, a coherent digital reality is constructed based on at least one of a story, a character, a place, a setting, an event, a conflict, a timeline, a climax, or a theme of an entertainment in any medium. A user is entertained by presenting aspects of an entertainment coherent digital reality to the user through one or more electronic devices. The entertainment coherent digital reality is presented in a mode in which the user need not be a participant in or have a presence in the coherent digital reality or in a place where the coherent digital reality is hosted. The user can observe or interact with the aspects of the coherent digital reality as part of entertaining the user. The entertainment coherent digital reality comprises part of a market of coherent digital realities.

In general, in an aspect, users can participate electronically in a governance that provides value to the users in connection with one or more alternative realities, in exchange for consideration delivered by the users. Membership relationships between the users and the governance, and the flow of value to the users and consideration from the users, are managed.

Implementations may include one or more of the following features. Each of at least some of the users participate electronically in other governances. The governance is associated with a profit-making venture. The governance is associated with a non-profit venture. The governance is associated with a government. The governance comprises a quasi-governmental body that spans political boundaries of real governmental bodies. The value provided by the governance to the users comprises improved lives. The value provided by the governance to the users comprises improved communities, value systems, or lifestyles. The value provided by the governance to the users comprises a defined package that is presented to the users and has a defined consideration associated with it.

In general, in an aspect, users are electronically provided with offers to participate as members of an online governance in one or more alternative reality packages that encompass defined value for the users in terms of improved lives, communities, value systems, or lifestyles, and participation by the users in the governance is managed. Consideration is collected in exchange for the defined value offered by the online governance.

In general, in an aspect, information is acquired that is associated with images captured by users of image-capture equipment in associated contexts. Based on at least the acquired information, guidance is determined that is to be provided to users of the image capture equipment based on current contexts in which the users are capturing additional images. The guidance is made available for delivery electronically to the users in connection with their capturing of the additional images.

Implementations may include one or more of the following features. The current contexts comprise geographic locations. The current contexts comprise settings of the image capture equipment. The image capture equipment comprises a digital camera or digital video camera. The image capture equipment comprises a networked electronic device whose functions include at least one of a digital camera or a digital video camera. The guidance is delivered interactively with the user of the image capture equipment during the capture of the additional images. The guidance comprises part of an alternative reality in which the user is continually enabled to capture better images in a variety of contexts.
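
One plausible reading of the guidance mechanism above is a history of capture settings indexed by context, with the most frequent prior settings suggested for the current context. Everything in the sketch, the context strings, the settings strings, and the frequency rule, is an assumption for illustration.

from collections import Counter, defaultdict

class GuidanceService:
    def __init__(self):
        self.history = defaultdict(Counter)   # context -> counts of settings

    def record(self, context: str, settings: str):
        """Store the settings used for an image captured in this context."""
        self.history[context][settings] += 1

    def guide(self, context: str) -> str:
        """Suggest the settings most often used in this context before."""
        if context not in self.history:
            return "no guidance available for this context"
        return self.history[context].most_common(1)[0][0]

service = GuidanceService()
service.record("sunset, beach", "ISO 100, f/8, 1/250s")
service.record("sunset, beach", "ISO 100, f/8, 1/250s")
service.record("sunset, beach", "ISO 400, f/4, 1/60s")
print(service.guide("sunset, beach"))   # 'ISO 100, f/8, 1/250s'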

In general, in an aspect, in connection with enabling the presentation at separate locations of an alternative reality to users of electronic devices that have non-compatible operating platforms, for each of the electronic devices an interface configured to present the alternative reality to users of the electronic devices is centrally and dynamically generated. The generated interface for each of the electronic devices is compatible with the operating platform of the device.

Implementations may include one or more of the following features. Each of the interfaces is generated from a set of pre-existing components. The pre-existing components are based on open standards. Each of the interfaces is generated from a combination of pre-existing components and custom components. The devices comprise multimedia devices. As the operating platform of each of the devices is updated, the dynamically generated interface is also updated.
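
The central, dynamic generation of platform-compatible interfaces might be sketched as one abstract layout rendered through per-platform component renderers. The platform names, components, and renderers below are invented; they only illustrate that the shared layout stays constant while each generated interface matches its platform.

# One abstract layout, rendered from pre-existing components per platform.
LAYOUT = ["presence_view", "boundary_controls", "reality_picker"]

RENDERERS = {
    "platform-X": lambda c: f"<x-widget name='{c}'/>",
    "platform-Y": lambda c: f"Y.widget({c!r})",
}

def generate_interface(platform: str):
    """Build a platform-compatible interface from the shared components."""
    render = RENDERERS[platform]    # updated when the platform is updated
    return [render(component) for component in LAYOUT]

print(generate_interface("platform-X"))
print(generate_interface("platform-Y"))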

In general, in an aspect, an electronic network is maintained in which information about personal, individual, specific, and detailed actions, behavior, and characteristics of users of devices that communicate through the electronic network is made available publicly to users of the devices. Users of the devices can use the publicly available information to determine, from the information about actions, behavior, and characteristics of the users, ways to enable the users of the devices to improve their performance or reduce their failures with respect to identified goals.

Implementations may include one or more of the following features. The ways to improve comprise commercial products. The actions, behavior, and characteristics of the users individually are tracked over time. The improvement of performance or reduction of failure is reported about individual users and about users in the aggregate. The ways to improve performance or reduce failure are provided through an online platform accessible to the users through the network. Users of the devices can manage their goals. Managing their goals comprises registering, defining goals, setting a baseline for performance, and receiving information about actual performance versus baseline. The ways to enable the users of the devices to improve their performance or reduce their failures are updated continually. Users are informed about the ways to improve by delivering at least one of advertising, marketing, promotion, or online selling. The ways to improve comprise enabling a user who is making an improvement as part of an alternative reality to associate in the alternative reality with at least one other user who is making a similar improvement.
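
As an illustrative sketch of the goal-management loop, registering a goal, setting a baseline, and comparing actual performance against it, consider the following; all names and numbers are assumptions for the example.

class GoalTracker:
    def __init__(self, goal: str, baseline: float):
        self.goal = goal
        self.baseline = baseline    # performance level set at registration
        self.results = []

    def report(self, value: float):
        """Record an actual performance measurement."""
        self.results.append(value)

    def performance_vs_baseline(self) -> float:
        """Average improvement (positive) or shortfall (negative)."""
        if not self.results:
            return 0.0
        return sum(self.results) / len(self.results) - self.baseline

tracker = GoalTracker("weekly sales", baseline=10.0)
tracker.report(12.0)
tracker.report(15.0)
print(tracker.performance_vs_baseline())   # 3.5 above baseline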

In general, in an aspect, a user of an electronic device is engaged in a reality that is an alternative to the one that she experiences in the real world at the place where she is located, by automatically presenting to her an always available multimedia presentation that includes recorded and real-time audio and video captured through other electronic devices at multiple other locations and is delivered to her through a communication network. The multimedia presentation includes live video of other people at other locations who are part of the alternative reality and video of places that are associated with the alternative reality. The user is given a way to control the presentation to suit her interests with respect to the alternative reality.

In general, in an aspect, a person can have a presence in an online world that is an alternative to a real presence that the person has in the real world. The alternative presence is persistent and continuous and includes aspects represented by real-time audio or video representations of the person and other aspects that are not real-time audio or video representations and differ from features of the person's real presence in the real world. The person's alternative presence is accessible by other people at locations other than the real world location of the person, through a communication network.

In general, in an aspect, through multimedia electronic devices and a communication network, a user can exist as one or more multiple selves that are alternates to her real self in the real world locale in which she is present. The multiple selves include at least some aspects that are different from the aspects of her self in the real world locale in which she is present. The multiple selves can be present in multiple remote places in addition to the real world locale. She can select any one or more of the multiple selves to be active at any time and while her real self is present in any arbitrary real world locale at that time.

In general, in an aspect, a person can electronically participate with other people in an alternative reality, by using at least one electronic device at the place where the person is located, and other electronic devices located at other places and accessible through a communication network. The alternative reality is conveyed to the person through the electronic device in such a way as to present an experience for the person that is substantially different from the physical reality in which the person exists, and exhibits the following qualities that are similar to qualities that characterize the physical reality in which the person exists: the alternative reality is persistent; audio visual; compelling; social; continuous; does not require any action by the person to cause it to be presented; has the effect of altering behavior, actions, or perceptions of the person about the world; and enables the person to improve with respect to a goal of the person.

These and other aspects, features, and implementations, and combinations of them, can be expressed as methods, systems, compositions, devices, means or steps for performing functions, program products, media that store instructions or databases or other data structures, business methods, apparatus, components, and in other ways.

These and other aspects, features, advantages, and implementations will be apparent from the prior and following discussion, and from the claims.

FIG. 1 is a pictorial diagram that illustrates a history timeline that diverges during a period of digital discontinuities that begin to produce the emergence of an Alternate Reality Teleportal Machine (ARTPM) and the Expandaverse.

FIG. 2 is a graphical illustration that expands the period of digital discontinuities to show simultaneous and cyclical transformations in digital technologies, organizations and cultures, with AnthroTectonic shifts in numerous basic assumptions.

FIG. 3 is a pictorial diagram that briefly summarizes some components of an Alternate Reality Teleportal Machine (ARTPM).

FIG. 4 is a pictorial diagram that illustrates physical reality (prior art).

FIG. 5 is a pictorial diagram that illustrates how a single person may choose to create a growing number of alternate realities (Expandaverse), some of whose options include multiple identities; multiple Shared Planetary Life Spaces (SPLS's); and utilizing multiple constructed digital realities, digital presence events, etc.

FIG. 6 is a pictorial diagram that illustrates some components and processes of the ARTPM's Alternate Realities Machine (ARM), especially introducing ARM boundaries and boundaries management.

FIG. 7 is a pictorial diagram that illustrates current networked electronic devices, in some examples described in the ARTPM as “subsidiary devices” (prior art).

FIG. 8 is a pictorial diagram that illustrates ARTPM devices and the Teleportal Utility (TPU).

FIG. 9 is a schematic diagram that illustrates a high-level view of some connections and interactions, including a consistent adaptive user interface across many ARTPM devices.

FIG. 10 is a pictorial diagram that illustrates some examples of controlling main TP devices and how they connect and interact.

FIG. 11 is a hierarchical chart that illustrates a logical summary grouping of some main components in the ARTPM.

FIG. 12 is a hierarchical chart that illustrates a logical summary grouping of some devices components in the ARTPM.

FIG. 13 is a hierarchical chart that illustrates a logical summary grouping of some digital realities components in the ARTPM.

FIG. 14 is a hierarchical chart that illustrates a logical summary grouping of some utility components in the ARTPM.

FIG. 15 is a hierarchical chart that illustrates a logical summary grouping of some services and systems components in the ARTPM.

FIG. 16 is a hierarchical chart that illustrates a logical summary grouping of some entertainment components in the ARTPM.

FIG. 17 is a pictorial diagram that illustrates some examples of more detailed descriptions of the main Teleportal (TP) devices and categories; and in some examples their combination as a new architecture for individual access and control over various types of networked electronic devices.

FIG. 18 is a pictorial diagram that illustrates some TP devices and components, and includes some examples of how they work together.

FIGS. 19 through 25 are pictorial diagrams that illustrate some styles for Local Teleportal devices including windows, wall pockets, shapes, frames, multiple integrated Teleportals, and Teleportal walls.

FIG. 26 is a pictorial diagram that illustrates some styles for Mobile Teleportal devices including mobile phone styles, tablet and pad styles, portable communicator styles, netbook styles, laptop styles, and portable projector styles.

FIGS. 27 and 28 are pictorial diagrams that illustrate some styles for Remote Teleportal devices including some fixed location styles and mobile location styles such as on land, in the water, in the air, and potentially in space.

FIG. 29 is a block diagram showing an example architecture of a Teleportal device that combines digital realities creation with communications, broadcasting, remote control, computing, display and other capabilities.

FIG. 30 is a flow chart showing some procedures for determining Teleportal processing locations based on the capabilities of each device.

FIG. 31 is a block diagram showing some processing flows in a Teleportal device.

FIG. 32 is a block diagram showing some processing flows of receiving broadcasts and broadcasting, which in some examples may include watching, recording, editing, digitally altering, synthesizing, broadcasting, etc.

FIG. 33 is a block diagram showing some simultaneous multiple processes in Teleportal processing.

FIG. 34 is a block diagram showing some examples of Teleportal processing within one device and/or within a plurality of devices, the utilization of remote resources in processing, multiple devices' processing of the same focused connection, etc.

FIG. 35 is a flow chart showing some examples of command entry to some Teleportal devices, with the addition of new I/O.

FIG. 36 is a pictorial block diagram showing an example universal remote control for some Teleportal devices.

FIG. 37 is a flow chart showing some examples of procedures for a universal remote control interface.

FIG. 38 is a pictorial block diagram showing some examples of the construction of digital realities, in this example by a Remote Teleportal.

FIG. 39 is a block diagram showing some examples of the construction of a digital reality, and its subsequent reconstructions by a plurality of devices, including utilizing network interception.

FIG. 40 is a block diagram showing some examples of digital realities construction processes, resource sources, and resources.

FIG. 41 is a flow chart showing some examples of procedures for broadcasting digital realities, monetizing broadcasted digital realities, and validating monetization steps in order to receive revenues.

FIG. 42 is a flow chart showing some examples of procedures for sponsoring (such as advertising) on constructed digital realities, receiving data from broadcasted digital realities, collecting monies from sponsors, and providing growth information and systems to creators/broadcasters of digital realities.

FIG. 43 is a flow chart showing some examples of procedures for integrating constructed digital realities with ARM boundaries management.

FIG. 44 is a pictorial block diagram showing some examples of the operation of a Superior Viewer Sensor (SVS).

FIG. 45 is a pictorial block diagram that illustrates some examples of the dynamic viewing provided by a Superior Viewer Sensor (SVS).

FIG. 46 is a flow chart showing some examples of procedures for providing dynamic SVS viewing.

FIG. 47 is a diagram illustrating some examples of changing an SVS view in accordance with the amount of horizontal movement by a viewer relative to a display.

FIG. 48 is a diagram illustrating some examples of changing an SVS view in accordance with changes in a viewer's distance from a display.

FIG. 49 is a pictorial block diagram that illustrates some examples of a continuous digital reality that is present in response to the presence of a specific identity.

FIG. 50 is a pictorial block diagram that illustrates some examples of publishing TP broadcasts (such as in some examples constructed digital realities from TP devices) so they may be found and used by others (such as in some examples from websites, databases, Electronic Program Guides, channels, networks, etc.).

FIG. 51 is a pictorial block diagram that illustrates some examples of language translation so that people who speak different languages may communicate directly, in some examples with automated recognition so the translation facility is transparent to use.

FIG. 52 is a pictorial block diagram that illustrates some examples of speech recognition interactions for control and use.

FIG. 53 is a pictorial block diagram that illustrates some examples of speech recognition processing that may be performed locally and/or remotely.

FIG. 54 is a flow chart showing some examples of procedures for optimization of speech recognition.

FIG. 55 is a pictorial block diagram that illustrates some examples of an overall architecture summary of subsidiary devices including some examples of subsidiary devices, device components, and devices data.

FIG. 56 is a pictorial diagram showing some examples of one identity simultaneously utilizing a plurality of subsidiary devices.

FIG. 57 is a flow chart showing some examples of procedures for one person with a plurality of identities selecting and using subsidiary devices.

FIG. 58 is a pictorial block diagram that illustrates some examples of control and data processes for accessing and using a plurality of types of subsidiary devices.

FIG. 59 is a flow chart showing some examples of procedures for retrieving protocols, and/or generating a protocol, for subsidiary device communication and/or control.

FIG. 60 is a block diagram showing some examples of utilizing a control application, a viewer application, and/or a browser to use a subsidiary device(s).

FIG. 61 is a flow chart showing some examples of procedures for initiating and running a subsidiary device control and/or viewer application.

FIG. 62 is a flow chart showing some examples of procedures for controlling a subsidiary device.

FIG. 63 is a flow chart showing some examples of procedures for translating inputs and outputs between a controlling device and a subsidiary device.

FIG. 64 is a pictorial diagram that illustrates some examples of a Virtual Teleportal (VTP) on a plurality of Alternate Input Devices/Alternate Output Devices (AIDs/AODs).

FIG. 65 is a pictorial block diagram that illustrates some examples of VTP processing on AIDs/AODs.

FIG. 66 is a flow chart and pictorial diagram showing some examples of initiating VTP connections with TP devices.

FIG. 67 is a flow chart showing some examples of procedures for VTP processing on TP devices.

FIG. 68 is a flow chart showing some examples of procedures for registering subsidiary devices (SD) and/or SD functions (such as applications, content, services, etc.) on an SD Server where they may be accessed for use.

FIG. 69 is a flow chart showing some examples of procedures for finding and using SD's by means of an SD Server, including sponsor/advertising systems, accounting systems to collect revenues and pay SD owners, and growth systems to increase usage and/or revenues.

FIGS. 70, 71 and 72 are pictorial block diagrams that illustrate some examples of TP digital presence for personal uses (70), commercial uses (71), and mobile uses (72).

FIG. 73 is a block diagram that illustrates some examples of a TP presence architecture.

FIG. 74 is a flow chart showing some examples of procedures for TP connections (identities) including opening a Shared Planetary Life Space (SPLS).

FIG. 75 is a flow chart showing some examples of procedures for TP connections to and opening PTR (places, tools, resources, etc.).

FIG. 76 is a diagram showing some examples of some TP connections steps with IPTR (identities, places, tools, resources, etc.).

FIG. 77 is a pictorial diagram and flow chart showing the focusing of a TP connection.

FIG. 78 is a block diagram that illustrates some examples of media options in a focused connection, or in some examples in SPLS connections.

FIG. 79 is a flow chart showing some examples of dynamic presence awareness to make focused connections.

FIG. 80 is a block diagram that illustrates some examples of individual(s) control of presence boundary(ies).

FIG. 81 is a block diagram that illustrates some examples of digitally combining TP presence and a place.

FIG. 82 is a block diagram showing some examples of options for presence at a place, such as in some examples syntheses when sending/receiving, when receiving/sending, by means of network alterations, and by substituting an altered reality at a source.

FIG. 83 is a flow chart showing some examples of procedures for TP addition of place(s) and/or content to a focused connection.

FIG. 84 is a flow chart showing some examples of procedures for the processing of a digital place(s).

FIG. 85 is a block diagram showing some examples of a TP audience(s) interacting at a place(s).

FIG. 86 is a block diagram illustrating scalability and fault tolerance for TP presence, TP resources, TP events, etc.

FIG. 87 is a flow chart showing some examples of procedures for finding digital presence events (such as a PlanetCentral or GoPort, search, alerts, top lists, APIs, portals, etc.), attending an event (including free or paid admission systems), and monetizing suddenly popular free events.

FIG. 88 is a flow chart showing some examples of procedures for filtering any digital presence with people such as in some examples a filtered display of only some people (based on a common attribute), and in some examples retrieving data (whatever is permitted from each request) on the people displayed based on a common attribute (such as name, address, credit score, net worth, etc.).

FIG. 89 is a pictorial diagram showing current reality (prior art) compared to some examples of the Alternate Realities Machine (ARM), illustrating some ARM control levels.

FIG. 90 is a pictorial block diagram illustrating some examples of how a person may have multiple (ARM) identities, multiple (ARM) SPLS(s) and ARM boundary management for each SPLS.

FIG. 91 is a pictorial diagram illustrating some examples of an identity with an SPLS (Shared Planetary Life Space) that includes identities, places, tools, resources, subsidiary devices, etc.

FIG. 92 is a pictorial diagram illustrating some examples of a Local Teleportal display.

FIG. 93 is a pictorial diagram illustrating some examples of a Mobile Teleportal display.

FIGS. 94 and 95 are pictorial diagrams illustrating some examples of a Virtual Teleportal display.

FIG. 96 is a flow chart showing some examples of procedures for selecting an identity and/or an SPLS (Shared Planetary Life Space).

FIG. 97 is a flow chart showing some examples of procedures for an identity's SPLS services.

FIG. 98 is a flow chart showing some examples of procedures for a private identity(ies) and/or a secret identity(ies) SPLS services.

FIG. 99 is a flow chart showing some examples of procedures for groups' SPLS services, whether for their members' public, private and/or secret identities.

FIG. 100 is a flow chart showing some examples of procedures for public SPLS services.

FIG. 101 is a pictorial block diagram illustrating some examples that summarize an ARM directory.

FIG. 102 is a block diagram showing some examples of ARM directory(ies) processes, data storage, lookup services, analyses/reporting, etc.

FIG. 103 is a block diagram showing some examples of an abstracted ARM directory(ies) architecture.

FIG. 104 is a block diagram showing some examples of entering, retrieving and processing directory entries.

FIG. 105 is a block diagram showing some examples of using and updating directory data.

FIG. 106 is a block diagram showing some examples of directory search and browsing interfaces for IPTR.

FIG. 107 is a pictorial block diagram and flowchart showing some examples of optimizing searching and browsing interfaces.

FIG. 108 is a flow chart showing some examples of procedures for selecting IPTR, connecting to it, making it part of a shared space, etc.

FIG. 109 is a flow chart showing some examples of procedures for adding and/or editing the IPTR in a shared space.

FIG. 110 is a block diagram showing some examples of directories reporting and/or recommendation processes.

FIG. 111 is a block diagram and flowchart showing some examples of recommendation processes that support rapid switching to improvements by a plurality of users, such as in some examples actionable choices to help achieve personal and/or group goals or tasks.

FIG. 112 is a flow chart showing some examples of procedures for selecting and opening an outbound shared space(s) including connecting to IPTR.

FIG. 113 is a flow chart showing some examples of procedures for opening an outbound or inbound shared space(s) with previous state retrieval (if needed).

FIG. 114 is a flow chart showing some examples of procedures for actions when an outbound shared space IPTR is not available.

FIG. 115 is a flow chart showing some examples of procedures for inbound shared space(s) connections, including SPLS boundary manager service(s).

FIG. 116 is a flow chart showing some examples of procedures for an inbound shared space connection request including in some examples add to SPLS, paywall, filter, and/or protection.

FIG. 117 is a flow chart showing some examples of procedures for managing a paywall boundary.

FIG. 118 is a flow chart showing some examples of procedures for applying paywall criteria, receiving paywall payments, generating paywall reports, etc.

FIG. 119 is a pictorial block diagram illustrating an example of validating paywall criteria.

FIG. 120 is a flow chart showing some examples of procedures for priorities and/or filters processing.

FIG. 121 is a flow chart showing some examples of procedures for TP protection services for individuals (identities), groups and the public.

FIG. 122 is a flow chart showing some examples of procedures for protection services for individuals, including in some examples prioritize/filter, paywall, reject, block/protect.

FIG. 123 is a flow chart showing some examples of procedures for protection services for groups, including in some examples prioritize/filter, paywall, reject, block/protect.

FIG. 124 is a flow chart showing some examples of procedures for protection services for the public, including in some examples value, act, protect.

FIG. 125 is a flow chart showing some examples of procedures for automated setting, updating or editing of boundaries, including in some examples paywalls, priorities, filters, protections, etc.

FIG. 126 is a flow chart showing some examples of procedures for retrieving, analyzing and displaying tracked boundary(ies) metrics.

FIG. 127 is a pictorial diagram illustrating an example of setting ARM boundaries automatically (group example: “Green Planet” Environmental Governance).

FIG. 128 is a flow chart showing some examples of procedures for manual setting, updating or editing of boundaries, including retrieving and applying “best available” choices including in some examples paywalls, priorities, filters, protections, etc.

FIG. 129 is a pictorial diagram illustrating an example of setting ARM boundaries manually (group example: “Green Planet” Environmental Governance).

FIG. 130 is a flow chart showing some examples of procedures for property protection devices for interactive properties, locations, devices, etc.

FIG. 131 is a pictorial diagram that briefly summarizes some components of an Alternate Reality Teleportal Machine (ARTPM), highlighting the Teleportal Utility(ies).

FIG. 132 is a block diagram illustrating an example of elements in some global technologies (prior art).

FIG. 133 is a block diagram illustrating an example of factored common elements in some global technologies (prior art), to identify “utility” elements.

FIG. 134 is a pictorial block diagram illustrating a summary example of common elements, services and transport in a Teleportal Utility(ies) (TPU).

FIG. 135 is a pictorial block diagram illustrating a TPU (Teleportal Utility[ies]) overview.

FIG. 136 is a pictorial block diagram illustrating some examples of TPU security and privacy.

FIG. 137 is a pictorial block diagram illustrating some examples of TPU data sharing.

FIG. 138 is a pictorial block diagram illustrating some examples of TPU messaging and metering.

FIG. 139 is a graphical diagram illustrating some examples of TPU managed transport and latency.

FIG. 140 is a pictorial block diagram illustrating some examples of TPU managed transport—differentiated services.

FIG. 141 is a pictorial block diagram illustrating some examples of TPU managed transport—differentiated session services.

FIG. 142 is a pictorial block diagram illustrating some examples of TPU managed transport—optimizing service quality.

FIG. 143 is a pictorial block diagram illustrating some examples of TPU managed transport—bandwidth reduction, multicast and unicast.

FIG. 144 is a pictorial block diagram illustrating some examples of TPU managed transport—bandwidth reduction, multicast broadcast.

FIG. 145 is a pictorial block diagram illustrating some examples of TPU managed transport—bandwidth reduction, compression.

FIG. 146 is a pictorial block diagram illustrating some examples of TPU OS's.

FIG. 147 is a pictorial block diagram illustrating some examples of TPU servers, storage and load balancing.

FIG. 148 is a pictorial block diagram illustrating some examples of current non-virtual applications (prior art).

FIG. 149 is a pictorial block diagram illustrating some examples of TPU virtual applications.

FIG. 150 is a pictorial block diagram illustrating some examples of TPU virtual architecture.

FIG. 151 is a pictorial block diagram illustrating some examples of a TPU optimization gateway (TPOG, or Teleportal Optimized Gateway).

FIG. 152 is a pictorial block diagram illustrating some examples of TPU AID/AOD (Alternative Input Device/Alternative Output Device) sessions.

FIG. 153 is a block diagram illustrating some examples of TPU events services processes.

FIG. 154 is a block diagram illustrating some examples of TPU services bus/hubs.

FIG. 155 is a block diagram illustrating some examples of TPU services architecture.

FIG. 156 is a block diagram illustrating some examples of TPU improvements processes.

FIG. 157 is a flow chart showing some examples of procedures for a one TP sign-on service and/or process.

FIG. 158 is a pictorial block diagram illustrating some examples of TPU devices management.

FIG. 159 is a pictorial block diagram illustrating some examples of TPU new devices discovery.

FIG. 160 is a flow chart showing some examples of procedures for devices configuration, including both automated and manual configurations.

FIG. 161 is a flow chart showing some examples of procedures for new device user identification, automated configuration, and configuration distribution.

FIG. 162 is a block diagram illustrating some examples of TPU differentiated services revenues.

FIG. 163 is a pictorial block diagram illustrating some examples of TPU business services communications with the public, customers, vendors and partners.

FIG. 164 is a flow chart showing some examples of procedures for a TPU business systems architecture.

FIG. 165 is a flow chart showing some examples of procedures for an example TPU customer billing system simultaneously accessible to customers, vendors, partners, and TP services; enabling appropriate data retrieval, payments and revenues for each party.

FIG. 166 is a table illustrating some examples of current uses of personal identities (prior art).

FIG. 167 is a block diagram illustrating some examples of multiple identities by identity service(s), identity server(s), etc.

FIG. 168 is a table illustrating some examples of multiple identities for one person.

FIG. 169 is a pictorial diagram illustrating an example of a user's identities management.

FIG. 170 is a block diagram showing some examples of an abstracted architecture for identity service(s), identity server(s), etc.

FIG. 171 is a flow chart showing some examples of procedures for setup and/or single sign-on for multiple identities and their services, devices, vendors, etc.

FIG. 172 is a flow chart showing some examples of procedures for a gateway, authentication, authorization and resources use by multiple identities.

FIG. 173 is a flow chart showing some examples of procedures for a person's multiple identities ownership of assets and property with authentication and auditing.

FIG. 174 is a flow chart showing some examples of procedures for setup of devices for use by multiple identities.

FIG. 175 is a flow chart showing some examples of procedures for the simultaneous use of a device by multiple identities.

FIG. 176 is a block diagram illustrating some examples of TPU applications services—sources of applications and services.

FIG. 177 is a block diagram illustrating some examples of TPU applications services—simple and complex applications.

FIG. 178 is a block diagram illustrating some examples of TPU applications services—multiple sources of applications, services and/or processes.

FIG. 179 is a block diagram illustrating some high-level examples of a customer-vendor lifecycle of TPU applications.

FIG. 180 is a flow chart showing some examples of TPU procedures and processes to run applications.

FIG. 181 is a flow chart showing some examples of TPU processes to run applications including device capability confirmation, and metering events.

FIG. 182 is a flow chart showing some examples of procedures for selecting and running TPU applications/application services.

FIG. 183 is a pictorial diagram showing some examples of the reality of current interfaces (prior art) compared to some examples of a consistent, adaptable TP interface for digital devices—a user experience transformation from a TP devices architecture.

FIG. 184 is a flow chart showing some examples of procedures for a TP devices interface service that adapts to different networked electronic devices.

FIG. 185 is a flow chart showing some examples of procedures for an adaptive user interface.

FIG. 186 is a block diagram showing some examples of adaptive interface components processes that include interface design, use, delivery, sources, repository(ies), metering and improvements.

FIG. 187 is a block diagram showing some examples of adaptive interface presentation.

FIG. 188 is a pictorial diagram showing some examples of the difference between current “competition” and pressures for differentiation/incompatibility (prior art) compared to TPU “frendition” of competition with an evolving framework/platform.

FIG. 189 is a block diagram showing some examples of ecosystem processes that align buying and using with planning, developing and selling.

FIG. 190 is a pictorial diagram showing some examples of TPU information exchange.

FIG. 191 is a block diagram and flow chart showing some examples of procedures for TPU data and revenue flows.

FIG. 192 is a block diagram showing some examples of the TPU infrastructure for new TP innovation (technologies, networks, devices, hardware, services, applications, etc.).

FIG. 193 is a block diagram and flow chart showing some high-level examples of the Active Knowledge Machine (AKM).

FIG. 194 is a flow chart showing some high-level examples of procedures for Active Knowledge (AK) processes.

FIG. 195 is a flow chart showing some high-level examples of procedures for AKM and AK interactions.

FIG. 196 is a flow chart showing some examples of procedures for active knowledge processes of identified users.

FIG. 197 is a block diagram showing some examples of AKM's parallel doing/storage/access structures.

FIG. 198 is a flow chart showing some examples of procedures for AKM performance analysis and escalation.

FIG. 199 is a flow chart showing some examples of procedures for AKM analysis and comparisons (trigger-based or user request-based).

FIG. 200 is a flow chart showing some examples of procedures for AKM user action(s) logging.

FIG. 201 is a diagram showing some examples of an AKM user performance record.

FIG. 202 is a flow chart showing some examples of procedures for AKM access knowledge resources service.

FIG. 203 is a pictorial block diagram and flow chart showing some examples of procedures for determining AK baseline(s) and gap analysis.

FIG. 204 is a flow chart showing some examples of procedures for optimization to select and deliver best AKI and AK resources, such as in some examples for continuous improvement, and in some examples to make AKM value visible.

FIG. 205 is a flow chart showing some examples of procedures for an AKM subscriber Quality of Life (QoL) improvement process.

FIG. 206 is a flow chart showing some examples of procedures for editing AKM QoL (Quality of Life) options.

FIG. 207 is a block diagram showing some examples of AK (Active Knowledge) content sources and construction.

FIG. 208 is a flow chart showing some examples of procedures for AKM message construction and display.

FIG. 209 is a pictorial block diagram and flow chart showing some examples of procedures for a device environment that is decentralized (e.g., fits some devices).

FIG. 210 is a pictorial block diagram and flow chart showing some examples of procedures for a device environment that is centralized (e.g., fits some devices).

FIG. 211 is a pictorial block diagram and flow chart showing some examples of procedures for a device environment that is a hybrid and uses intermediate/transition devices (e.g., fits some devices).

FIG. 212 is a flow chart showing some examples of procedures for adding and/or updating an AKM device, and/or a transition device.

FIG. 213 is a flow chart showing some examples of procedures for device outbound communications.

FIG. 214 is a flow chart showing some examples of procedures for device inbound communications.

FIG. 215 is a flow chart showing some examples of procedures for AKM multimedia recognition and matching.

FIG. 216 is a flow chart showing some examples of procedures for AKM triggers hierarchy and triggers processes.

FIG. 217 is a flow chart showing some examples of procedures for AKM triggers flows.

FIG. 218 is a flow chart showing some examples of procedures for AKM triggers self-service management.

FIG. 219 is a flow chart showing some examples of procedures for editing some AKM triggers options.

FIG. 220 is a flow chart showing some examples of procedures for AKM automated alerts, including free and/or paid AKM service(s).

FIG. 221 is a flow chart showing some examples of procedures for calculating AKM reporting and/or dashboards.

FIG. 222 is a pictorial diagram illustrating an example of AKM reporting by category, for an anonymous user.

FIG. 223 is a pictorial diagram illustrating an example of AKM reporting by category, for an identified user, and/or a paid service(s).

FIG. 224 is a pictorial diagram illustrating an example of an AKM dashboard for anonymous users.

FIG. 225 is a pictorial diagram illustrating an example of an AKM dashboard for an identified user, and/or a paid service(s).

FIG. 226 is a flow chart showing some examples of procedures for comparative reporting.

FIG. 227 is a pictorial diagram illustrating some examples of AKM reporting for product vendors and/or their customers.

FIG. 228 is a flow chart showing some high-level examples of procedures for AKM optimizations.

FIG. 229 is a flow chart showing some examples of procedures for AKM optimization “sandbox” testing, including optimization process improvements.

FIG. 230 is a pictorial diagram illustrating some examples of AKM optimizations data sources and resources.

FIG. 231 is a flow chart showing some examples of procedures for AKM optimizations manual rating and/or feedback system(s).

FIG. 232 is a flow chart showing some examples of procedures for AKM dynamic content addition/editing.

FIG. 233 is a flow chart showing some examples of procedures for AKM methods for editing/creating AKI (Active Knowledge Instructions)/AK (Active Knowledge).

FIG. 234 is a block diagram illustrating some examples of media and tools for AKI/AK content creation.

FIG. 235 is a flow chart showing some examples of procedures for AKM method(s) to access non-AKM AKI/AK.

FIG. 236 is a flow chart showing some examples of procedures for AKM API(s) for creating or editing devices instructions (“direct AKI” to automate tasks).

FIG. 237 is a flow chart showing some examples of procedures for AKM content or error management.

FIG. 238 is a flow chart showing some examples of procedures for an AKM optimizations ecosystem.

FIG. 239 is a flow chart showing some examples of procedures for some outputs of an AKM optimizations ecosystem, such as identifying and making visible “best” and “worst” choices based on actual behavior and use.

FIG. 240 is a flow chart showing some examples of resources for data acquisition in AKM optimizations ecosystem.

FIG. 241 is a flow chart showing some example areas and some example options for conducting AKM optimizations.

FIG. 242 is a flow chart showing some examples of procedures for AKM predictive analytics, including Economic Value Added (EVA) estimates.

FIG. 243 is a flow chart showing some examples of procedures for editing and/or associating user(s), vendor and/or Governances profile(s), record(s) and identity(ies) management.

FIG. 244 is a flow chart showing some examples of procedures for AKM goal(s) self-service controls.

FIG. 245 is a flow chart showing some examples of procedures for vendor and/or Governances “packages” sales that include AKM services for assured customer success.

FIG. 246 is a flow chart showing some examples of procedures for AKM continuous visibility of success/failure by goals/“packages” customers.

FIG. 247 is a block diagram illustrating some examples of AKM tracking and measurement of success/failure by goals/“packages” customers, and AKM optimizations and improvements based on results.

FIG. 248 is a flow chart showing some examples of a Governance(s) for individuals, herein an “IndividualISM” that supports personalized and decentralized self-governance(s).

FIG. 249 is a flow chart showing some examples of a Governance(s) by corporations, herein a “CorporatISM” that supports economic lock-in at satisfying consumption levels by means of comprehensive “packages” designed to solve numerous consumer needs in single “packages” at tiered, fixed prices.

FIG. 250 is a flow chart showing some examples of a Governance(s) for groups, herein a “WorldISM” that is centralized, trans-border and supports collective actions in broad areas such as environmentalism, health, humanitarianism, religion and ethnicity.

FIG. 251 is a flow chart showing some examples of procedures for a Governances revenue system (GRS), providing in some examples self-determined means to automatically support one or more Governances financially, in some examples with control by individuals who can slow or stop funding if a Governance is ineffective or fails to produce results.

FIG. 252 is a flow chart showing some examples of some procedures for a freedom from dictatorships system—opening a free (stealth) identity's communications.

FIG. 253 is a flow chart showing some examples of some procedures for a freedom from dictatorships system—monitoring and protecting a free (stealth) identity's communications, and opening and closing a free identity's (stealth) SPLS's and/or connections.

FIG. 254 is a flow chart showing some examples of some procedures for a freedom from dictatorships system—tasks performed by a free (stealth) identity outside the country in which they are oppressed.

FIG. 255 is a block diagram illustrating some examples of AKM systems operating in and with photographic devices.

FIG. 256 is a flow chart showing some examples of some procedures for AKM initial use(s) of a device—digital camera.

FIG. 257 is a flow chart showing some examples of some procedures for retrieving the AKI/AK needed for initial device use(s)—digital camera.

FIG. 258 is a flow chart showing some examples of some procedures for AKM new features learning in a device—digital camera.

FIG. 259 is a flow chart showing some examples of some procedures for optimizations and continuous improvement of “best available” AKI/AK retrieved to continuously improve device use(s)—digital camera.

FIG. 260 is a flow chart showing some examples of some procedures for AKM domain learning from a device—digital camera.

FIG. 261 is a flow chart showing some examples of some procedures for vendors to transform devices from AKM use(s)—digital camera.

FIG. 262 is a block diagram and flow chart showing some examples of some procedures for selling and/or using a “goals package”—a digital camera as a vacation camera, or “VacationCam.”

FIG. 263 is a block diagram illustrating some examples of AKM device communications—digital camera.

FIG. 264 is a block diagram illustrating some examples of Governances processes.

FIG. 265 is a block diagram illustrating some examples of a CorporatISM Governance example—upward mobility to lifetime luxury “package.”

FIG. 266 is a block diagram illustrating some examples of an IndividualISM Governance example—one or more ‘Customers In Control, Inc.’.

FIG. 267 is a block diagram illustrating some examples of AKM transformations as a driver of humanity's success.

FIG. 268 is a block diagram illustrating some examples of AnthroTectonics: continuous AKM transformations of devices and Governances.

FIG. 269 is a flow chart showing some examples of some options for using Reality Alternate technologies, in some examples in entertainment products, in some examples as extensions to entertainment products, and in some examples as expansions of entertainment products.

FIG. 270 is a flow chart showing some examples of a new form of online entertainment, “RealWorld Entertainment” (RWE), which blends games with the real world, blends income producing economic activity within games with the real world, and crosses boundaries between how games operate and affect the real world.

FIG. 271 is a graphical diagram showing some examples of the RWE's (RealWorld Entertainment's) roadmap and timeline, which is the ARTPM Alternate Reality history and Expandaverse on which the Reality Alternate technologies are based.

FIG. 272 is a graphical diagram showing some examples of the RWE's timeline in both the ARTPM's “history” and in the RWE's play and real activities.

FIG. 273 is a block diagram showing some examples of the RWE's non-linear timeline, which in some examples “players” can enter at any stage of the ARTPM Alternate Reality's history.

FIG. 274 is a block diagram showing some examples of the RWE's roles, world views and types of governances.

FIG. 275 is a block diagram showing some examples of entering the RWE's by choosing an identity(ies), timeline, stage, conflict, world view, Governance and style.

FIG. 276 is a flow chart showing some examples of some procedures for accessing the RWE.

FIG. 277 is a flow chart showing some examples of some procedures for logging in to the RWE, or in some examples registering as a real player, in some examples applying for a real paid job as a player, in some examples playing as an unpaid game player, in some examples serving as a virtual non-real employee, or in some examples joining and/or entering the RWE in another way.

FIG. 278 is a flow chart showing some examples of some procedures for using the RWE including some examples of making, buying and selling real RWE goods or services, or virtual RWE goods or services with real money, virtual money, scrip or another financial instrument; and in some examples having an RWE financial account that may contain real money, virtual money, scrip, assets, liabilities or another financial instrument.

FIG. 279 is a block diagram showing some examples of RWE groups building Reality Alternate technologies or performing other commercial activities for the RWE and/or for the real world in order to produce sales and earn virtual and/or real money; and in some examples companies outside the RWE building those technologies for money.

FIG. 280 is a flow chart showing some examples of some procedures for using Reality Alternate technologies for no cost and no license fee within the RWE.

FIG. 281 is a flow chart showing some examples of some procedures for an RWE “play” member or group evolving into an “RWE real” member or group that is paid in real money and earns real income.

FIG. 282 is a flow chart showing some examples of some procedures for transitioning from an RWE “play” group (or individual) to an “RWE real” group that can earn real money and employ Reality Alternate technologies in a plurality of licensed activities.

In the examples the components may consist of any combination of devices, components, modules, systems, processes, methods, services, etc. at a single location or at multiple locations, wherein any location or communication network(s) includes any of various hardware, software, communication, security or other components. A plurality of examples that incorporate these examples may be constructed and included or integrated into other devices, applications, systems, components, methods, processes, modules, hardware, platforms, utilities, infrastructures, networks, etc.

Turning now to FIG. 1, “Emergence of Expandaverse and Alternate Realities,” this Alternate Reality shares the same history as our current reality before the development of digital technologies, but then diverges, with the Alternate Reality emerging as a different digital evolution during the recent digital environment revolution. From that point the “history” of the Expandaverse develops and uses new technologies whose goal is to deliver a higher level(s) of human success and connections as a normal network process—just as you can plug any electric appliance into a standard wall outlet and receive power, the Expandaverse's reality developed a new type of “Teleportal Utility,” “Teleportal Devices” and ARTPM components that provide success, presence and much more—which, in this Alternate Reality, alters the success and quality of life of individuals, groups, corporations and businesses, governments and nations, and human civilization.

As depicted in FIG. 1, four views of this Alternate Reality's history are illustrated simultaneously. The Alternate Reality's Cosmology 6 12, Stages of History 7 21, Wealth System 8 24 and Culture System 9 27 diverged from our current reality recently, starting with Digital Discontinuities 20 that occur during the recent digital era. This Alternate History posits a series of conceptual reversals 20 plus expansions beyond physical reality 20 that are described in more detail in FIG. 2 (which divides the discontinuities into three sub-stages: Technological discontinuities, Organizational discontinuities, and Cultural discontinuities) and elsewhere.

The reason for the Digital Discontinuities 20 is that digital technology provides new means—technologies that can be designed and combined at new levels, such as in some examples meta-systems—to define and control human reality, whether as one reality or as multiple simultaneous alternate realities. In this Alternate History, reality has been designed to achieve clear goals that include delivering and/or helping achieve a higher level(s) of human success, satisfaction, wealth, quality of life, and/or other positive benefits as normal network services—just as you can plug any electrical appliance into a standard wall outlet and receive power, the Alternate Reality Expandaverse was developed as a new type of “utility” so plugging in provides success, global digital presence and much more—altering the lives of individuals, groups, corporations and businesses, governments and nations, and civilizations.

Cosmology 6 (left column of FIG. 1): Cosmology is the first of this Alternate Reality's views of human history: First is “Earth as the center of the universe” 10. For most of human history 14 15 16 17 the Earth was believed to be the center of a small universe 10 whose limits were immediate and physically experienced—what the human eye could see in the night sky, and where a person could travel before possibly falling off the edge of the earth. Second is “The Universe” 11. Starting with the rebirth of science during the Renaissance 18 and continuing thereafter 19, the Universe 11 was a scientifically proven physical entity whose boundaries have been repeatedly pushed back by new discoveries—initially by making the Earth just one of the planets that revolve around the sun, then discovering that the sun is just one star among the vast numbers of stars in one of a large number of galaxies, then “mapping” the distribution of galaxies and projecting it backwards to the Big Bang when the Universe came into existence. Today scientists continue to expand this knowledge by pursuing theories of multiple dimensions and strings, and by using new tools such as the Large Hadron Collider (LHC). Third is the “Expandaverse” 12. The Alternate Reality's cosmology diverges from the current reality's cosmology starting with discontinuities 20 that occur during the recent digital era. This Alternate History Stage 21 posits a Cosmology transition from the Universe 11 to the Expandaverse 12 (as described elsewhere).

Stages of History 7 (center column of FIG. 1): A second of this Alternate Reality's views of human history is the Stages of History 7, which are described as discontinuous stages because the magnitude of each change required new forms of consciousness and awareness to come into existence. Some examples of this are common throughout history, starting with agricultural stability replacing nomadic hunting and gathering; with money and markets replacing bartering physical goods; with city states, rulers and laws replacing tribal leaders; right up to telephone calls replacing written letters. Each substantial change requires a change in consciousness of what we do, how we do that, and in some cases who and what we are, our relationships with those around us, and our expectations for our lives and futures. A somewhat more detailed example with its own stages is the invention of money, which changed value from individual physical items to abstract values represented by “prices” rather than utility—and over time changed pricing from bargained prices to fixed prices—with each of these changes requiring people to learn new ways to think, feel and re-conceptualize the ways they acquire most of the things in their lives, until today we buy most of what we need at fixed prices.

This view of history (as discontinuous stages that include discontinuities in people's consciousness) fits the Expandaverse 12 stage 21 because the Expandaverse includes new forms of awareness and consciousness. In addition, the “S-curve” is used to represent each stage of history 14 15 16 18 19 21 because the S-curve describes how new technologies spread, how they mature, and then how they are eclipsed and disrupted by newer technologies. In brief, innovations have a life cycle with a startup phase during which they are developed and (optionally) improved; they then spread from the innovator to other individuals and groups (sometimes rapidly and sometimes slowly) as others realize the value of each new invention; this diffusion and growth stage may increase in speed and scope if (optional) improvements are made in the technology; the process typically slows after the diffusions and improvements have been exhausted and a mature technology is in place; mature technologies are often ripe for replacement by new innovations that must start at the bottom of their own S-curve. While FIG. 1 illustrates this as major stages of history 14 15 16 18 19 21, in reality there are countless smaller technologies, stages, innovations, and advances that have each climbed their own S-curves, only to be replaced and eclipsed by newer innovations—or declines, as illustrated by the Dark Ages 17.
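
For illustration only, the S-curve life cycle described above is often modeled with a logistic function; a minimal sketch, assuming hypothetical parameter values that are not part of this disclosure, might be:

```python
import math

def adoption_fraction(t: float, t_midpoint: float = 10.0, growth_rate: float = 0.5) -> float:
    """Logistic S-curve: fraction of eventual adopters at time t.

    Slow during startup, fastest near t_midpoint (diffusion and growth),
    then flattening as the technology matures, as described above.
    """
    return 1.0 / (1.0 + math.exp(-growth_rate * (t - t_midpoint)))

# One innovation climbing its own S-curve as it spreads, improves and matures:
for year in (0, 5, 10, 15, 20):
    print(year, round(adoption_fraction(year), 3))
```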

In the center column's stages of history 7, these discontinuous stages in both history and consciousness are illustrated as: Agriculture 14, which roughly includes domesticated animals, fire, stone tools and early tools, shelter, weapons, shamans, early medicine and other innovations from the same period of history. City states 15, which roughly includes rulers, laws, writing, money, marketplaces, metals, blacksmithed tools and weapons, and other innovations from the same period of history. Empires 16, which roughly includes larger civilizations formed in Europe, the Middle East and North Africa, Asia, and Central and South America—as well as the numerous innovations and institutions required to create, govern, run and sustain each of these empires/civilizations. The Dark Ages 17 are noted to illustrate how humanity, civilization and our individual consciousness can be diminished as well as increased, and that there may be a correlation between the absence of freedom and the (e)quality of our lives. The Renaissance 18 roughly includes a rebirth of independent thinking with the simultaneous developments of science (such as astronomy, navigation, etc.), art, publishing, commerce (trade, the rise of guilds and skills, the emergence of the middle classes, etc.), the emergence of nation states, etc. The Industrial Revolution 19 produced too many innovations and changes in consciousness to list, with a few notable examples including going from the first flight in 1903 to the first walk on the moon in 1969 (less than 70 years), transportation (from trains to automobiles, trucks, national highway systems, and worldwide international jet flights), mass migrations for work (first to the cities, then to the suburbs, then to airports for routine inter-city job travel), electronic communications (from the telegraph to the telephone, cell phone, e-mail, and the Internet), manufacturing (from factories to assembly lines to mass customization of products and services), mass merchandising of disposable products and services (from “wear it out” to “throw it out”), and much more.

Expandaverse 21: The Alternate Reality's Expandaverse stage of history diverges from the current reality's history starting with “AnthroTectonic Discontinuities” 20 that began during the recent digital era. This Alternate History posits a historic stage transition from the Industrial Revolution 19 to an Alternate Realities 21 Stage. In the Expandaverse individuals may have multiple identities, and each identity may live in one or a plurality of Shared Planetary Life Spaces (SPLS). Each SPLS may be its own alternate reality that is determined and managed by controlling its boundaries, with specific means for doing this described in the Alternate Realities Machine (ARM) herein. Each identity may switch between one or a plurality of SPLS's (alternate realities) by logging in and out of them. The Expandaverse's initial core technologies include those described herein, including in some examples: TPU (Teleportal Utility) 21, ARM (Alternate Realities Machine) 21, Multiple identities/Life Expansion 21, SPLS (Shared Planetary Life Spaces) 21, TP SSN (Teleportal Shared Spaces Network) 21, Governances 21, AKM (Active Knowledge Machine) 21, TP Devices 21 (LTPs, MTPs, RTPs, AIDs/AODs, VTPs, RCTPs, Subsidiary Devices), Directory(ies) 21, Auto-identification of identities 21, optionally including auto-classifying and auto-valuing identities, Reporting 21, optionally including recommendations, guidance, “best choices”, etc., Optimizations 21, etc.
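
As a minimal sketch, assuming hypothetical structures and names rather than the disclosed implementation, the relationships described above (one person, multiple identities, each identity with one or a plurality of SPLS's, switched by logging in and out) might be represented as:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class SPLS:
    """A Shared Planetary Life Space: one alternate reality, defined by its boundaries."""
    name: str
    boundaries: dict = field(default_factory=dict)  # e.g. priorities, filters, paywalls, protections

@dataclass
class Identity:
    name: str
    visibility: str                                 # "public", "private" or "secret"
    spaces: list = field(default_factory=list)      # one or a plurality of SPLS's

@dataclass
class Person:
    identities: list = field(default_factory=list)
    active: Optional[Identity] = None

    def log_in(self, identity_name: str) -> None:
        """Switching realities is modeled here as logging in as another identity."""
        self.active = next(i for i in self.identities if i.name == identity_name)

me = Person(identities=[
    Identity("work-self", "public", [SPLS("Team Office")]),
    Identity("artist-self", "private", [SPLS("Studio World")]),
])
me.log_in("artist-self")        # shift into a different self-selected reality
```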

Wealth System 8 (a right column of FIG. 1): The third of this Alternate Reality's views of human history is the dominant system for producing wealth 8, which is also viewed as discontinuous stages because each Wealth System also requires new forms of awareness and consciousness to come into existence. These are illustrated in a right column of FIG. 1, titled Wealth System 8, and include: The oldest and longest lasting is Agriculture 22. Agriculture was the dominant economic focus for most stages of human history 14 15 16 17 18—a long period in which food was scarce, average life spans were short, disease was common, the vast majority of people were involved in agriculture, and wealth was rare. Under Agriculture 22 humanity's standard of living stayed nearly the same—“poor” by today's standards—for literally thousands of years. When the “human herd” was thinned by war, natural disasters, plagues, etc. food became abundant and people were better off, until the “herd” grew and scarcity and poverty returned. Thomas Hobbes was considered accurate when he described the “Natural Condition of Mankind” in Leviathan (1651) as “solitary, poor, nasty, brutish, and short.” With the recent rise of Industry 23, “Capitalism” within a stable and regulated governmental system may be defined and practiced in many ways, but there is no question that where it has been practiced successfully for decades or centuries it has produced the largest increases in wealth ever seen in human history. As a system of wealth production, nothing has ever exceeded the combination of private ownership of the means of production, a stable legal system that attempts to reduce corruption, prices set by market forces of supply and demand rather than economic planning, earnings set by market forces rather than economic planning or high tax rates, and profits distributed to owners and investors without excessive taxation. In short, when there is a good set of “rules” that provides the freedom to take independent personal and economic actions—and profit from them—the evidence from history shows that large numbers of people have a better chance to become prosperous and even rich than under any other economic or governmental system yet practiced.

A new Wealth System started emerging in this Alternate History from the ARTPM: Teleportal Presences & Knowledge 24. The “discovery” of the Expandaverse, a new digital world, opened new economic opportunities and exploitation, which is what happened when a “new world” was discovered in the past (such as Columbus's discovery of the physical New World). First and most important, this new Wealth System 24 did not change Capitalism 19 23 as it operated under the Industry Wealth System 23. In fact, it multiplied and strengthened capitalism and its support for acquiring personal wealth by ever larger numbers of people through their independent, self-chosen multiple identities and multiplied actions. In an alternate history example, imagine what millions more college graduates could do if added to the economy: adding multiple identities allowed many college graduates to add new identities, and allowed the economy to rapidly obtain large numbers of economically experienced college graduates. In some ARTPM examples if you have multiple identities (with some public identities, some private identities, and some secret identities) each of your identities can live in separate alternate realities, earn separate incomes, own separate assets, and take advantage of different ways to produce wealth—expanding your range of economic choices so you have multiple ways to become wealthy, consume more, enjoy more in your life, and do much more with your multiple earnings—so that one middle class life may receive the equivalent of several middle class incomes and combine them to enjoy an upper class outcome. Rather than achieving life extension (because the goal of living for hundreds of years or longer will not be achieved during our lifetime), the Expandaverse provides life expansion into multiple simultaneous identities and alternate realities. Within these potentially expanded multiple incomes and combined consumption there is also a stronger dynamic alignment between people's goals, needs, desires and what is provided to them—described herein as “AnthroTectonics”—which operates within free market capitalism. This, as a Wealth System, may increase the volumes of economic creation and consumption by instantly multiplying the number of educated and successful people who may operate successfully, with global presence and delivered knowledge, throughout multiple modern economies—in brief, each expensive college degree may now be put to more uses by more identities, and on a larger worldwide scale. The Alternate Reality's Wealth System 24 diverges from the current reality's Industry 23 Wealth System with discontinuities 20 that occur during the recent digital era. This Alternate History thus posits a Wealth System 8 transition from the Industrial Wealth System 23 to Teleportal Presences & Knowledge 24 that is described elsewhere.

Culture System 9 (far right column of FIG. 1): The fourth of this Alternate Reality's views of human history is the dominant system for human culture 9, which is also viewed as discontinuous stages because each Culture System also requires new forms of awareness and consciousness to come into existence. These differing sources of culture are illustrated in a right column of FIG. 1, titled Culture System 9, and are based on the communications technologies available in each system: The oldest, most direct and most physical is Local Cultures 25, which were based on the immediate lives that people experienced in extended families, tribes, city states, early empires, etc. Even though “Local Cultures” spans a wide range of governances from tribes to empires, the common element is what people experience directly and personally from their local environment (even if it is controlled by dominant dictators from a distance, as in an empire such as Rome or China). A new Culture System started with the gradual rise of Mass Communications 26, starting slowly with the invention of the printing press in the 1400's, but gaining increasing scope and media during the industrial revolution of the 1800's, and exploding into a global culture after the advent of electricity, radio, television, photography, movies, the telephone and other media in the 1900's—to culminate in an Internet era of global brands, mass-desired affluence and minute-by-minute twitter-blogger-24×7 global news and culture bombardment in the early 2000's.

A new Culture System 27 emerged in this Alternate History after it was recognized that digital technologies give both individuals and groups new means to control reality. The “discovery” of the Expandaverse, a new digital world, opened new social opportunities, such as enjoying multiple identities, setting boundaries on each SPLS, etc.; this is what happened when new cultural trends emerged in the past (such as printing, telephone communications, the automobile, flying, etc.). Specifically, the ARTPM included an Alternate Realities Machine (herein ARM) which enabled multiple Self-Selected Cultures to emerge as an alternative to the Mass Communicated Culture that had previously dominated reality. In the Expandaverse's Self-Selected Cultures each person could have a plurality of identities (as described elsewhere), wherein each identity could have one or a plurality of Shared Planetary Life Spaces (SPLS). Each SPLS is essentially “always on” so that the identities (“I”, which includes identities, people and groups), places (“P”), tools (“T”) and resources (“R”)—herein IPTR—in it are everywhere and connected at all times. Each SPLS also has multiple boundaries that can be controlled, so each identity can include what it wants and keep out what it doesn't want. If I have a plurality of identities, and each of my identities can also have a plurality of Shared Lives Connections, and each of my identities may be everywhere that is connected at any time that I choose, and I can include and exclude what I want from each Planetary Life Space, then there is no shortage of choices; rather, I have many more choices than today, but they are my choices, and the parts of the mass culture that I don't want no longer impose themselves on me.
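
A minimal sketch of such an always-on SPLS and its include/exclude boundaries might look as follows, assuming a simple tag-based membership model that is illustrative only and not the disclosed implementation:

```python
from dataclasses import dataclass, field

@dataclass
class IPTREntry:
    """One member of an SPLS: an Identity, Place, Tool or Resource (IPTR)."""
    kind: str                                   # "identity", "place", "tool" or "resource"
    name: str
    tags: set = field(default_factory=set)

@dataclass
class SharedSpace:
    members: list = field(default_factory=list)
    included: set = field(default_factory=set)  # boundary: what this reality wants
    excluded: set = field(default_factory=set)  # boundary: what it keeps out

    def present(self) -> list:
        """The 'always on' view: members whose tags pass this SPLS's boundaries."""
        return [m for m in self.members
                if not (m.tags & self.excluded)
                and (not self.included or m.tags & self.included)]

space = SharedSpace(
    members=[IPTREntry("identity", "Ana", {"friend"}),
             IPTREntry("resource", "AdFeed", {"advertising"})],
    excluded={"advertising"},
)
print([m.name for m in space.present()])   # -> ['Ana']
```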

In a brief alternate history summary of the Self-Selected Culture enabled by this Alternate Realities Machine (ARM), it gives each person multiple human realities, and makes each of them a conscious choice: We can choose to create multiple identities to enjoy multiple lives simultaneously, and each identity can have one or a plurality of Shared Planetary Life Spaces, and each SPLS can copy or create different boundaries (e.g., its settings of what to include and exclude), and more. In some examples we can include everything in the current reality, such as its total carpet bombing of branded media messaging; in some examples we can prioritize it and make sure what we like is included, such as our family, close relatives, friends and shared interests; in some examples we can limit it and make sure what we dislike is excluded, such as entertainment that is too sexual or too violent for our children; in some examples we may optionally choose to be paid to include media sources that want our attention and need it for their financial prosperity, like advertisers willing to pay us to see their messages. Additionally, when one person has a plurality of identities, and when each identity has a plurality of SPLS's, and when each SPLS has different interests and boundaries, that one person may enjoy multiple different human realities that each have worldwide “always on presence.” In addition, analyses and reports on the outcome metrics from different “ARM reality settings” and their results may identify those that produce the greatest successes (however each person prefers to use available metrics to define that)—so that each identity can specify their goals, see the size of the gap(s) between themselves and those who reach them “best,” and rapidly adopt the “best” reality settings from what is generally most or more successful. Because the results of ARM settings are widely and personally reported as gaps to reach one's goals, the “best realities” may be widely seen and copied—perhaps providing new means to raise income, success, satisfaction and happiness by trying and evolving self-selected human reality(ies) at a new pace and trajectory, to help determine what works best for varied peoples and groups. With additional success guidance from this alternate reality's Active Knowledge Machine (herein AKM), these self-chosen realities may also be applied more successfully.
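
A minimal sketch of how one incoming item might be classified against such self-selected boundary settings, and of how “best” settings might be copied and then edited, could look as follows (the setting names and the order of checks are assumptions for illustration, not the disclosed mechanism):

```python
def arm_decide(item_tags: set, settings: dict) -> str:
    """Classify one incoming item or connection against ARM boundary settings."""
    if item_tags & settings.get("excluded", set()):
        return "exclude"        # filtered out of this self-selected reality
    if item_tags & settings.get("paywalled", set()):
        return "paywall"        # admitted only if the sender pays for attention
    if item_tags & settings.get("prioritized", set()):
        return "prioritize"     # surfaced first
    return "admit"

# Adopting a widely reported "best" reality is copying its settings, then editing:
best_settings = {"prioritized": {"family", "friends"},
                 "excluded": {"violent-entertainment"},
                 "paywalled": {"advertising"}}
my_settings = {k: set(v) for k, v in best_settings.items()}
print(arm_decide({"advertising"}, my_settings))   # -> "paywall"
```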

Who doesn't walk down the street and dream about what should be improved, what should be better, what we would really like if we could choose and switch into a more desirable new reality just because we want it? In the alternate timeline, a new Self-Selected Culture emerged because new types of choices became possible: new means enabled specifying a plurality of goals and seeing the alternate realities whose metrics showed how well they achieved them; copying successful ARM settings let people try new realities and test them personally; a collection of alternate realities that worked better could be kept; and each person could then shift at will between their most successful realities by logging in and out as different identities. As people learned about this new Self-Selected Culture they modified each of their chosen realities by changing its SPLS boundary settings, kept what worked best to achieve their various and different personal goals, and in turn distributed the “best alternate realities” for others to use to enjoy better and happier lives. Instead of one external ordinary public culture that attempts to control and shape everyone commercially, with the ARTPM's Alternate Realities Machine the alternate timeline gained multiple digital realities and individual control of each of them to enjoy the more successful and happier realities in which we would like to live.

FIG. 2 is a magnification of the “AnthroTectonic” digital discontinuities 20 in FIG. 1 between the current reality's timeline and the Expandaverse's timeline. In FIG. 2, “AnthroTectonics Discontinuities: Simultaneous and Cyclical Transformations,” three simultaneous and cyclical discontinuities are illustrated 30 31: Technological Discontinuities 32 36, Organizational Discontinuities 33 37, and Cultural Discontinuities 34 38, together with their resulting new opportunities 35 and new technologies 35 that produce newer discontinuities 32 33 34 with successive cycles of transformations. In the Alternate Reality timeline the first is Technological Discontinuities 32 that expand in size and scope. Some examples from the current reality are digital content types that are now created and distributed worldwide by individuals or small independent collaborations as well as by organizations: words, pictures, music, news, magazines, books, movies, videos, tweets, real-time feeds, and other content types. Digital technologies made each of these faster and easier for a worldwide multiplication of sources to create, edit, find, use, copy, transmit, distribute, multiply, combine, adapt, remix, redistribute, etc. These discontinuities started in the 1950's and are ongoing and continuously expanding 36, and the total volume of views from new content sources may surpass the content products of large media corporations, with notable examples such as the newspaper industry.

In the Alternate Reality timeline Technological Discontinuities 32 caused Organizational Discontinuities 33 that in turn alter organizations, as many people, organizations, corporations, governments, etc. received numerous benefits from transforming themselves digitally. In some examples from the current reality, organizations have transformed themselves into digital communicators and digital content users (which includes entire industries, governments, nonprofit organizations, etc.) that increasingly utilize digital networks, content and data in many forms; as a result organizations have adapted their employees' skills, human resources, locations, functions (such as IT), teams, business divisions, R&D processes, product designs, organizational structures, management styles, marketing and much more. These transformations are currently taking place and are ongoing into the foreseeable future 37.

In the Alternate Reality timeline the combination of Technological Discontinuities 32 and Organizational Discontinuities 33 causes the emergence of Cultural Discontinuities 34 that also expand in size and scope. Continuing the examples from the current reality (digital content), the cultures in content industries like music, movies, publishing, cable television, etc. are shifting radically as their customers, audiences, products, services, revenues, distribution, marketing channels and much more are altered by the current reality's transformation of them into digital industries.

This is cyclical 35. Each of these—Technological Discontinuities 32, Organizational Discontinuities 33 and Cultural Discontinuities 34—provides both new opportunities 35 and ideas for new technologies 35 that may in turn create new advances that are also discontinuities 32 33 34. AnthroTectonics 40 is the result, which may be described by the geologic metaphor of a new mountain range: it is as if a giant flat continent existed, but as the “geologic digital plates” of new technologies 32 36, new organizational adaptations 33 37 and cultural shifts 34 38 collide, individual mountains rise up until there is an entire digital mountain range pushed high above the starting level—with new mountains continuing to emerge 35 40 from the pressure of that new mountain range 32 33 34.

These discontinuities 14 15 16 18 19 20 21 in FIG. 1 produce a new wealth system 8 24, new economic growth, and new income. A better metaphor adapts “the goose that laid a golden egg”: while some newly laid golden eggs are cashed in 32 33 34, other eggs are hatched and grown into geese that lay more golden eggs 35 32 33 34, with those new geese 32 33 34 35 producing both more gold and more geese that lay more golden eggs 32 33 34 35 until wealth becomes abundant rather than scarce. This is a new kind of wealth system 8 24 in which the more we take from it, and the more we drive it, the more wealth there is; the traditional economist's ideas about scarcity have been made obsolete in the new AnthroTectonic Alternate Realities 12 21 24 27. Consider two sets of examples, the first of which is historic from the current reality: In Germany about 400,000 years ago the golden eggs of human hunting were laid with the first known spears; in Asia about 50,000 years ago the golden eggs of ovens and of bows and arrows were laid; in the Fertile Crescent about 10,000 years ago the golden eggs of farming and pottery were laid; in Mesopotamia about 5,000 years ago the golden eggs of cities and metal were laid; in India about 2,000 years ago the golden eggs of textiles and the zero were laid; in China about 1,000 years ago the golden eggs of printing and porcelain were laid; in Italy about 500 years ago the remarkably diverse Renaissance laid entire flocks of geese who themselves laid many new types of golden eggs of science, crafts, printing and the spread of knowledge; in England about 200 years ago the similarly diverse Industrial Revolution laid many more flocks of geese with golden eggs like steam engines, spinning jennies, factories, trains and much more; and within the last few decades an entire flock of digital geese laid the Internet's golden eggs and the many industries and new generations of golden eggs that have come from it.

In the current reality's history humanity created these numerous “geese” that “laid these golden eggs”; none of them existed until humans created them. Traditional economists thought of them as scarcities, but in the Alternate Reality Timeline these were thought of in the opposite way because they expanded humanity's wealth and abundance. These golden eggs have familiar industry names like transportation, communications, agriculture, food, manufacturing, real estate, construction, energy, retailing, utilities, information technology, hospitality, financial services, professional services, education, healthcare, government, etc. But in the Alternate Reality Timeline when something new is created it is as if a golden egg were hatched and a new gosling is born to lay many more golden eggs 32 33 34 35. Transportation is one example of a flock of geese who lay “golden eggs” like ships, cars, trucks, trains and planes. Retail is another, and its flock lays golden eggs like malls, furniture stores, electronics stores, restaurants, gas stations, automobile and truck dealers, building materials stores, grocery stores, clothing stores, etc. When geese mate they produce more offspring that lay more golden eggs, such as when transportation mates with retail and produces “golden eggs” like warehousing, distribution, storage, shipping, logistics, supply chains, pipelines, air freight, seaports, courier services, etc. When the Alternate Reality Timeline uses global digital presence it accelerates economic growth by stimulating the production of many more golden eggs at ever faster rates; the take-up of helpful new ideas and products, at a worldwide scale, is the normal way people live with an ARTPM.

The AnthroTectonic component of the ARTPM's alternate reality harnesses this “golden eggs” model to drive new economic growth, prosperity and abundance by making this a set of simultaneous and parallel discontinuities 32 36 33 37 34 38 35 40. It consciously uses these to leap out of the economic scarcity model into a future of consciously stimulated advances and expanding abundance. For an example of how this works, in the current reality ownership and property expanded into a major source of middle-class wealth and assets with the centuries-long development of real estate property ownership and a mass construction industry, such as the mass marketing of houses in large suburban developments—which converted farmland into individually owned assets that appreciate in price. There is a visible connection between expanding the types of assets and widespread ownership: when a new type of “golden egg” creates new types of properties in an existing or new industry, those new properties add to the available assets and the wealth of people and corporations. In the Alternate Reality Timeline new types of property are easy to create because Intellectual Property is real and the ARTPM follows that reality's established IP laws and rules (as described elsewhere outside of this document).

An example from the ARTPM itself, and its alternate reality timeline, illustrates this: In some examples audiences for broadcast media may add boundaries and paywalls so they are paid for their attention, rather than providing it for free—so your attention becomes your property, what you choose to perceive becomes your property, and your consciousness has new digital self-controls—your consciousness is your asset that you can control and monetize to produce more income. Similarly, in some examples the ARTPM lets individuals establish multiple identities, where each new identity may be a potential source of additional income so that each person may multiply their incomes and increase their wealth. Similarly, in some examples the ARTPM provides means for multiple “governances” (separate from and different from governments) where each governance may provide new activities that can scale up to meet various personal and social needs—which in turn expands the economic activities and contributions from governances. Similarly, in some examples the ARTPM's Teleportal Utility (herein TPU) provides consistent means to add multiple new types of devices and services, some of which may include Local Teleportals (LTPs), Mobile Teleportals (MTPs), Remote Teleportals (RTPs), Virtual Teleportals (VTPs), Remote Control Teleportals (RCTPs), and other new types of devices that may each add rapidly advancing presence and communication features and capabilities beyond existing devices. Similarly, in some examples the ARTPM's Active Knowledge Machine (herein AKM) provides dynamic knowledge with systems to deliver what we each need to know, when and where we need to know it—an infrastructure that delivers a growing range of human successes over the network rather than requiring each of us to achieve personal success independently and on our own. Similarly, in some examples many other types of property, capabilities and advances are provided by this discontinuous AnthroTectonic process 32 36 33 37 34 38 35 40, which together constitute the digital discontinuities 20 in FIG. 1 and the wealth system 24 and culture system 27 of the Expandaverse 12.

In the Alternate Reality timeline AnthroTectonic Discontinuities are larger and often “reversals” of the assumptions that are common and widely accepted in our current reality. In the Alternate Reality Timeline's History some of the transformed organizations and transformed people realized that the new digital environment would become a cultural divergence that transforms everything. They consciously chose to help this divergence evolve for “economic growth,” so that it would increase personal incomes, raise living standards and create more wealth faster; and for “the greater good,” so that it would help large numbers of people choose and reach their personal goals by both personal means (such as multiple identities and/or boundaries) and collective means (such as governances). This helped its promoters, too, because those who led these divergences profited enormously from driving these AnthroTectonic Discontinuities. They placed themselves in worldwide leadership positions—they gained corporate and personal dominance at the center of a new and more successful worldwide civilization.

An example is corporate training: In the current reality corporate training started with staff who wrote processes as procedural manuals and taught them in classrooms on a fixed schedule. With the Internet this evolved into webinars and distance learning that train remotely located employees who no longer need to travel to a central facility. Today consistent corporate training can reach many employees in less time, and can even be managed and delivered globally. In the Alternate Reality Timeline a growing range of knowledge is made dynamic and is delivered by the network based on each person's real-time actions and activities, so they receive the knowledge they need when and where they need it. The network becomes a source of success, with two-way interactions making learning and succeeding a normal part of doing and being—as described in the ARTPM's Active Knowledge Machine (herein AKM).

How large are the Alternate Timeline's AnthroTectonic Discontinuities? To provide a new stage where human success is delivered as a normal process, and where the world is connected in new ways, the Expandaverse reverses or transforms many of the current reality's fundamental assumptions and concepts simultaneously 38:

Reality 39: FROM reality controls people TO we each control our own realities.

Boundaries 39: FROM invisible and unconscious TO explicit, visible and managed.

Death 39: FROM one life TO life expansion through multiple identities.

Presence 39: FROM where you are TO everywhere in multiple presences (as individual or multiple identities).

Connectedness 39: FROM separation between people TO always on connections worldwide.

Contacts 39: FROM trying to phone, conference or contact a remote recipient TO always present in a digital Shared Space(s) from your current preferred Device(s) in Use.

Success 39: FROM you figure it out TO success is delivered by the network.

Privacy 39: FROM private TO tracked, aggregated and visible (especially “best choices”).

Ownership of Your Attention 39: FROM you give it away free TO you can charge for it if you want.

Ownership of Devices and Content 39: FROM each person buys these TO simplified access and sharing of commodity resources.

Trust 39: FROM stranger danger TO most people are good when instantly identified and classified.

Networks 39: FROM transmission TO identifying, tracking and surfacing behavior.

Network Communications 39: FROM electronic (web, e-store, email, mobile phone calls, e-shopping/e-catalogs, tweets, social media postings, etc.) TO personal and face-to-face, even if non-local.

Knowledge 39: FROM static knowledge that must be found and figured out TO active knowledge that finds you and fits your need to know.

Rapidly Advancing Devices 39: FROM you're on your own TO two-way assistance.

Buying 39: FROM selling by push (marketing and sales) and pull (demand) TO interactive during use, based on your immediate actions, needs and goals.

Culture 39: FROM one common culture with top-down messages TO we choose our cultures and we set their boundaries (paywalls, priorities [what's in], filters [what's out], protection, etc.).

Governances 39: FROM one set of broad politician-controlled governments TO choosing your life's purposes and then choosing one or a plurality of governances that help you achieve your life's goals.

Personal Limits 39: FROM we are only what we are TO we can choose large goals and receive two-way support, with multiple new ways to try and have it all (both individually and collectively).

In the Alternate Reality's History both reversals and transformations turned out to be central to humanity's success because the information that was surfaced, the ways people became connected, and a plurality of simultaneous transformations enabled a plurality of people and groups to connect, learn, adopt “what's best”, and succeed in varied ways at a scale and speed that would have been impossible if the Alternate Reality's former timeline (our current reality) had continued.

As illustrated in FIG. 3, “Teleportal Machine (TPM) Summary,” this provides some examples of new capabilities for a Teleportal Machine 50 to deliver new devices, networks, services, alternate realities, etc. In some examples a Teleportal Utility (TPU) 64 provides new capabilities for the simultaneous delivery of new networks: in some examples a Teleportal Network 52 (see below); in some examples a Teleportal Shared Space Network 55 (see below); in some examples a Teleportal Broadcast & Applications Network 53 (see below); in some examples Remote Control 61 of a plurality of devices and resources like LTPs 61, RTPs 61, PCs 61, mobile phones 61, television set-top boxes 61, devices 61, etc.; in some examples a range of other types of Teleportal Networks 58, in some examples Teleportal Social Network(s) 59, in some examples News Network(s) 59, in some examples Sports Network(s) 59, in some examples Travel Network(s) 59, and in some examples other types of Teleportal Networks 59; and in some examples running a Web browser 59 61 that provides access to the Web, Web applications, Web content, Web services, Web sites, etc., as well as to the Teleportal Utility and any of its Teleportal Networks, services, features, applications or capabilities. In some examples it may also provide Virtual Teleportal capabilities 60 for downloading widgets or applications that attach or run a Virtual Teleportal on online devices 61, in some examples mobile phones, personal computers, netbooks, laptops, tablets, pads, television set-top boxes, online video games, web pages, websites, etc. In some examples a Virtual Teleportal may be accessed by means of a Web browser 61, which may be used to add Teleportaling to any online device (in some examples a mobile phone by means of its web browser and data service, even if a vendor artificially “locks out” or blocks that mobile phone from running a Virtual Teleportal). In some examples Teleportals may be used to access entertainment 62, in some examples traditional entertainment products 63 and in some examples multiplayer online games 63, which in some examples have some real world components 63 (as described elsewhere) and in some examples exist only in a game world 63. Further, in some examples, by means of the AKM (Active Knowledge Machine) said TPU provides interactions with numerous types of devices 57, which are detailed in the AKM and its components.

Unlike the wide range of different and often complex user interfaces that prevent some customers from using some types, models, basic features, basic functions, or new versions of various devices, applications and systems—and too often prevent them from using a plurality of advanced features of said diversity of devices, applications and systems—said Teleportal Utility 64 52 53 58, Teleportal Shared Space(s) 55 56, Virtual Teleportals 60, Remote Control Teleportaling 60, Entertainment 62, RealWorld Entertainment 62, and AKM interactions 57 share an Adaptable Common User Interface 51 (see the Teleportal Utility below). The conceptual basis of said interface is “teleporting,” that is, the normal and natural steps one would take if it were possible to step directly through a Teleportal into a remote location and interact directly with the actual devices, people, situations, applications, services, objects, etc. that are present on the remote side. Because said Teleportal's “fourth screen” can add a usable interface 51 across a wide range of interactions 64 52 53 55 57 58 60 62 that today require customers to work through difficult interfaces on the many types and models of products, services and applications that run on today's “three screens” (PCs, mobile phones and navigable TVs on cable and satellite networks), said Teleportal Utility's Adaptable Common User Interface 51 could make it easier for customers to use said one shared Teleportal interface to reach higher rates of success and satisfaction across a plurality of tasks and goals than may be possible when they are required to figure out a myriad of different interfaces on the comparable blizzard of technology-based products, services, applications and systems in the current reality.

As a result of said broad applicability of the Teleportal's “fourth screen” to today's “three screens”, said Teleportal components 50 51 64 52 53 55 57 58 60 62 may provide substitutes and/or additions to current devices, networks and services that constitute innovations in their functionality, ease of use, integration of multiple separate products into one device or system, etc.:

Substitutes: Some Teleportal Devices, Networks and Platform (see below) may optionally be developed as products and services that are intended to provide substitutes for existing products and services (such as those that run on today's “three screens”) when users need only the services and functionality that Teleportaling provides, in some examples:

PCs as accessible commodities (online) 60: In some examples PCs may be used from Teleportals by means of Remote Control 60 instead of running the PCs themselves. In some examples the purchase of one or a plurality of PCs might be replaced by network-based computing whereby the user runs Web PCs and PC applications online by means of physical and/or virtual Teleportals 60. In some examples said PCs may be run online by means of remote control when using a Teleportal(s) 60. This is true for the potential replacement of home PCs 60, laptops 60, netbooks 60, tablets 60, pads 60, etc. In some examples these devices may be replaced by utilizing unused RCTP-controllable devices online 60 from other Teleportal users at some times of the day or evening. In some examples these devices may be unused overnight so might be provided as accessible online resources 60 for those in parts of the world where it is morning or afternoon, and similarly devices in any part of the world might be made available overnight and provided online 60 to others when they are not being used. In some examples individuals and companies have unused PCs or laptops with previously purchased applications software that are not the latest generation and are currently not in use, so these might be provided full-time online 60 to those who need to use a PC as a commodity resource. In some examples these devices may be provided for a charge 60 and provide their owners income in return for making them available online. In some examples these devices might be provided free online 60 to a charity that provides access to PCs worldwide, such as to school children in developing countries, to charities that can't afford to buy enough PCs, etc.
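
The timesharing idea above can be sketched briefly. The following is illustrative only and is not the recited means: the function names, the overnight availability window, and the use of UTC offsets as a stand-in for a real scheduling policy are all assumptions.

```python
# Illustrative sketch: offer an owner's idle device online 60 while the
# owner's local time is overnight, so requesters elsewhere can use it.
from datetime import datetime, timedelta, timezone

def is_available(owner_utc_offset: int, now_utc: datetime) -> bool:
    """True while the owner's local time falls in an overnight window."""
    local_hour = (now_utc + timedelta(hours=owner_utc_offset)).hour
    return local_hour >= 23 or local_hour < 7

def match_offers(device_offers: list, now_utc: datetime) -> list:
    """Return the offered devices that may be used remotely right now."""
    return [d for d in device_offers if is_available(d["utc_offset"], now_utc)]

offers = [{"id": "pc-1", "utc_offset": -5},  # New York owner: midday, in use
          {"id": "pc-2", "utc_offset": 9}]   # Tokyo owner: 1 a.m., idle
print(match_offers(offers, datetime(2011, 5, 24, 16, 0, tzinfo=timezone.utc)))
# -> [{'id': 'pc-2', 'utc_offset': 9}]
```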

Some mobile phone and landline calling services 55: In some examples one or a plurality of mobile and landline telephone services might be replaced by Teleportal Shared Space(s) 55, whether from a fixed location by means of a Local Teleportal (LTP) 52, from mobile locations by means of a Mobile Teleportal (MTP) 52, by means of Alternate Input Devices (AIDs) 55/Alternate Output Devices (AODs) 52 60, etc.

Mobile phone or landline telephone services: There are obvious substitutions such as substituting for telephone communications 55. In some examples some phone applications like texting 53 may be run on a TP Device 52 by means of a Virtual Teleportal 60; in some examples texting 53 may be run on a Web browser in a mobile phone 61; in some examples texting 53 may be run when a Web browser 61 in turn runs a Virtual Teleportal 60 that provides said services substitution; in some examples texting 53 may be run by online TP applications 53; etc. In some examples location-based services such as navigation and local search may be replaced on Teleportals 53 (again with TP-specific differences). In some examples telephone services, in some examples telephone directories, voice mail/messaging, etc., may have Teleportal parallels 53 (though with TP-specific differences).

Cable television 53 60 and satellite television 53 60 on Teleportals instead of on Televisions: In some examples cable television set-top boxes, or satellite television set-top boxes (herein both cable and satellite sources are referred to as “set-top boxes”), may be used from Teleportals by means of Remote Control 60 instead of running the output signal from the set-top boxes on Television sets. In some examples the purchase of one or a plurality of cable and/or satellite television subscriptions might be replaced by network-based viewing whereby the user runs set-top boxes online by means of physical and/or Virtual Teleportals 60. In some examples said set-top boxes may be run and used online by means of remote control when using a Teleportal(s) remotely 60. This is true for the potential replacement of home televisions 60, cable television subscriptions 60, satellite television subscriptions 60, etc. In some examples these set-top box devices may be replaced by utilizing unused devices online 60 from other Teleportal users at various times of the day or night. In some examples these set-top boxes may be unused during late overnight hours so might be provided as accessible online resources 60 for those in parts of the world where it is a good time to watch television, and similarly set-top boxes in any part of the world might be made available during overnight hours and provided online 60 to others when they are not being used—which may help globalize television viewing. In some examples individuals and companies have set-top boxes with two or more tuners where an available tuner might be run remotely to record a television show(s) for later retrieval or playback. In some examples television may be accessed and displayed by means of IPTV 53 (which is television that is Internet-based and IP-based). In some examples a teleportal may view television shows, videos or multimedia that is available on demand and/or broadcast over the Internet by means of a Web browser 61 or a web application 61.

Services, applications and systems: Some widely used online services might be provided by Teleportals. Some examples include PC-based and mobile phone-based services like Web browsing and Web-based email, social networks access, online games, accessing live events, news (which may include news of specific categories and formats such as general, business, sports, technology, etc. news, in formats such as text, video, interviews, “tweets,” live observation, recorded observations, etc.), location-based services, web search, local search, online education, visiting entertainments, alerts, etc.—along with the advertising and marketing that accompanies any of these. These and other services, applications and systems may be accessed by means such as an application(s) or a Web browser that runs on physical Teleportals, on other devices by means of Virtual Teleportals, or on other remote Teleportals by means of Remote Control Teleportaling, etc.

New innovations: Entirely new classes of devices, services, systems, machines, etc. might be accessed by means of a Teleportal(s) or innovative new features on Teleportals, such as 3D displays, e-paper, and other innovative uses described herein.

Additions to Subsidiary Devices: Alternatively, vendors of PCs, mobile phones, cable television, satellite television, landline phone services, broadband Internet services, etc., may utilize ARTPM technology(ies) (its IP [Intellectual Property]) and Utility(ies) to add Teleportal features and capabilities to their devices, networks and/or network services—whether as part of their basic subscription plan(s), or for an additional charge by adding it as another premium, separately priced service(s).

The current reality is physical and local and it is well-known to everyone. As depicted in FIG. 4, “Physical Reality (Prior Art),” the Earth 70 is the normal and usual physical reality for all human beings. When you walk out on a public city street 71 you are present there and can see everything that is present on the street with you—all the people, sidewalks, buildings, stores, cars, streetlights, security cameras, etc. Similarly, all the people and cameras present on that street at that time can see you. Direct visual and auditory contact does not have any separation between people—everyone can see each other, talk to each other, and hear what any person says if they are close enough. Physical reality is the same when you go to the airport to get on a plane 75 to fly to an ocean beach resort 73. When you arrive at the airport and are present in it you can see everyone and everything there, and everyone who is at the airport and in the same space as you can see you. Physical reality stays the same after you go through the airport's security checkpoint and are in the more secure area of your plane's boarding gate—again, in the place you are present you can see and hear everyone and everything, and everyone and everything can see and hear you. Physical reality stays the same on the plane during the flight 75, when you arrive at your vacation beach resort 73, and when you walk on the beach. When you walk through the resort, go down to the beach and stand gazing over the ocean at the sunset 73, everyone who is present in the same physical reality as you can see you and talk to you. No matter where you travel on the Earth 70 by walking, driving a car or flying in a plane, physical reality stays the same. The state of things as they actually exist is this: when you go into any public place anywhere, at any time, you can see everyone and everything that is there, and if you are close enough to a person you can also hear that person—and in every public place where you are present, everyone who is there can see you, and anyone who is close enough to you can also hear you.

Physical reality is the same in private spaces such as when you use a security badge to enter your employer's private company offices in the city 71. Once you enter your company's private offices everyone who is in the same space as you can see you regardless of whether you are in a receptionist's entry area, a conference room, a hallway, a cubicle, an R&D lab, etc.—and in each of these private spaces you can see everyone who is in each place with you. If you want to enter anyone's even more private space you can simply walk to their open door or cubicle entry and knock and ask if they have a minute, or if you see the person in a hallway you can simply stop and talk to him or her.

Physical reality stays the same in your most private spaces such as when you drive home to a house in the suburbs 72. If anyone is at home, such as your family, and you are in the same room with any of them, you can see and hear them and they can see and hear you. In this most private of spaces you can see and be with everyone who is in your house but not with you simply by walking down the hall and going into the room they are in.

Some observations about physical reality are helpful here. We have long had the implicit assumption that using a telephone, video conference, video call, etc. involves first identifying a particular person or group and then contacting that person or group by means such as dialing a phone number, entering a list of email addresses, entering a web address, etc. Though not expressed explicitly, a digital contact was person-to-person (or group-to-group in a video conference), and it was different from being simultaneously present in Physical Reality—you need to contact someone to make a digital connection. Until you make a selection and a contact you cannot see and hear everyone, and everyone cannot see or hear you.

Another observation comes from fields such as science, ethics, morality, politics, philosophy, etc. This is also an implicit assumption that underlies many fields of human activity—given what we know about the way the world is, we know this is not an ideal world and it has room for improvements, so what should those improvements be? It doesn't matter whether our recognition of this implicit assumption comes from the fields of science, ethics, morality, politics, philosophy, sociology, psychology, simply talking to someone else, or many other areas of society or life. As we stand anywhere on the Earth and look about us at our physical reality—including all the people, places, tools, resources, etc.—we can see from the many things people have done that there is a widely practiced implicit assumption that we can make this a better place, whether we are improving it for ourselves, for other people, for the things around us, or for the environment in which everything lives.

This recitation starts with its “feet on the ground” of physical reality and moves immediately to the two issues just raised: First, why doesn't digital reality work the same as physical reality? Suppose an Alternate Reality made digital reality work the same as physical reality—you can see everywhere and everyone, and are present with everything connected. In the ARTPM's digital reality you have an immediate, open, always on connection with the available people, places, tools, resources, etc. Even more interesting as a transformation, everyone and everything (including accessible tools and resources) can see you, too. The ARTPM calls this a Shared Planetary Life Space (SPLS), and just as in physical reality there are both public SPLS's in which everyone is present, and private SPLS's where you define the boundaries—and you can even have secret SPLS's whose boundaries are even more confidential. Just as when you walk out on a public physical street and see everything and everything sees you, when you enter a PUBLIC Shared Planetary Life Space you have an immediate open connection with everyone and everything that is available in that public digital SPLS. And just as when you walk into a private physical place such as your home or a company's private offices, when you enter a PRIVATE Shared Planetary Life Space you have an immediate private connection with everyone and everything that is a member of that private SPLS.
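
A minimal sketch may make the SPLS parallel to physical reality concrete; all names are hypothetical and the access rule is simplified to membership. Entering a PUBLIC space connects an identity to everyone present, while a PRIVATE or SECRET space admits only its members, as with a home or a company's offices.

```python
# Illustrative sketch of public/private/secret SPLS's (hypothetical names).
from dataclasses import dataclass, field

@dataclass
class SPLS:
    name: str
    kind: str                                # "public", "private", or "secret"
    members: set = field(default_factory=set)
    present: set = field(default_factory=set)

    def enter(self, identity: str) -> set:
        """Enter the space and return everyone you are now connected to."""
        if self.kind != "public" and identity not in self.members:
            raise PermissionError(f"{identity} is outside this SPLS's boundary")
        self.present.add(identity)
        return self.present - {identity}

street = SPLS("public-street", "public")     # like a city street: open to all
home = SPLS("home", "private", members={"alice", "bob"})
street.enter("carol")                        # immediate open connection
home.enter("alice")                          # members only, like a private office
```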

While it is a substantial change to make digital reality parallel physical reality, the real question is the second issue: the world as it is, is not ideal and has room for improvements, so what should those improvements be? This Alternate Reality's answer is the ARTPM. Digital reality is designed by people, so people can make it into what they want and need. As a starting point, can that be more meaningful and valuable than what has become known as virtual reality, digital communications, augmented reality, and various applications and digital communications achieved with telephone land lines, PCs, mobile phones, television set-top boxes, digital entertainment, etc.?

This Alternate Reality has a digital reality that in some examples has the explicit goal of helping us become better in multiple ways we want and choose. In addition to Shared Planetary Life Spaces it includes self-improvement processes so a normal part of digital presence is receiving Active Knowledge about how to succeed, which may include seeing its current state, knowing the “best choice(s)” available, and being able to switch directly and successfully to what's best—to make your life better and more successful sooner. Your digital presence includes immediate opportunities to do more, want more, and have more.

The cultural evolution of this Alternate Reality has a divergent trajectory: “If you want a better reality, choose it.”

As an addition to our Physical Reality (prior art), this recitation introduces the Expandaverse and its technologies and components—a new design for an Alternate Reality, collectively known as the Alternate Reality Teleportal Machine.

Turning now to FIG. 5, “Alternate Reality (Expandaverse),” this recitation includes a TP Shared Spaces Network (herein TP SSN), multiple identities 80 81, an Alternate Realities Machine (herein ARM) with Shared Planetary Life Spaces 83 84, boundaries management to control those SPLS's, and ARTPM components that relate generally to providing means for individuals, groups and the public to fundamentally redefine our common human reality as multiple human identities, multiple realities (via ARM management of the boundaries of Shared Planetary Life Spaces, or SPLS), and more—so that our chosen digital realities are a better reflection of our needs and desires. In addition, this includes accessible constructed digital realities and participatory digital events that may be utilized by various means described herein such as streaming from RTPs (Remote Teleportals); digital presence at events by means such as PlanetCentrals, GoPorts, alert systems, and third-party services; and other means that relate generally to providing means for enjoying, utilizing, participating in, etc., various types of constructed digital realities as described herein.

In our current reality physical presence is more important and digital contacts are secondary. The ARTPM diverges from our current reality, which is physical and in which our primary presence is in a common current reality—the ARTPM provides means for one or a plurality of users to reverse the current physical presence-first priority so that an SPLS provides closer “always on” connections to both people (such as individuals or identities) and parts of the world (such as unaltered or digitally constructed) that are most interesting and important to us, regardless of their locations or whether they are people, places, tools, resources, digital constructs, etc.—it is a multi-dimensional Alternate Reality compared with what local physical reality has been throughout human evolution and history.

In some examples the ARTPM embodies larger goals: A human life is too short—we die after too few decades. Many would like to live for centuries but this is medically out of reach for those alive today. Instead, the ARTPM provides means to extend life within our current life spans by enabling people to enjoy living multiple lives 80 81 82 at one time, thereby expanding our “life time” in parallel 82 rather than longitudinally. In brief, we can each live the equivalent of more lives 80 81 within our limited years 82 85 in more “places” 88 by having multiple identities 81, even if we are not able to increase the number of years we are alive.

In some examples another larger goal is the success and happiness of each of our identities 80 81 82. Each identity 81 may create, buy, control, manage, participate in, enjoy, experience, etc. one or a plurality of Shared Planetary Life Spaces 83 84 85 in which they may have other incomes, activities or enjoyments; and each of their identities 80 81 may also utilize ARTPM components in some examples the Active Knowledge Machine (herein AKM), reporting of current “best choices,” etc. to know more about what they need to do to have more successful lives in the emerging digital environments 85 88. Thus, one person's multiple identities may each become better at learning, growing, interacting, earning, enjoying more varied entertainments, being more satisfied, becoming more successful, etc.—as well as better connected with the people, places, tools and resources that are most important to them. In addition to the SPLS's 83 84 85 and the constructed digital realities 86 87 88 and participatory digital events 86 87 88 that are controlled and/or enjoyed by each identity 80 81 82, a person's identities 80 81 may be present in other SPLS's 83 84 85 and/or in constructed digital realities 86 87 88 and/or in participatory digital events 86 87 88 that may each be public (such as a Directory(ies), rock concert, South Pacific beach, San Francisco bar, etc.), or private (such as an extended family, a company where a person works, a religious institution such as a local church or temple, a private meeting, an invitation-only performance, a privately shared experience, etc.).

Therefore, in some examples it is an object of the Alternate Realities Machine to introduce a new digital paradigm for human reality whereby each person may control their identities 80 81 82, their SPLS reality(ies) 83 84 85, and their digital realities 86 87 88 and presence at participatory digital events 86 87 88 by utilizing one or a plurality of means provided by the ARTPM—means that diverge from our current historical reality by controlling our identities 80 81 82, controlling our realities 83 84 85 86 87 88, and ultimately may give us control over reality. In a brief summary, this new digital paradigm may be simple: “If you want a better reality, choose it.”

Turning now to FIG. 6, “Teleportal Machine (TPM) Alternate Realities Summary: Alternate Realities Machine (ARM),” some components of the ARM, which is a component of the ARTPM, are illustrated at a high level. Said illustration begins with the Current Reality 100 in which the Earth 102 provides Physical Reality 102 for one person at a time 103. As our current mass communications culture and Digital Era emerged, one characteristic of the Current Reality 100 is large and growing volumes of public culture 105, commercial advertising 105, media 105, and messaging 105 that flood each person 104 103 and compete for each person's attention, brand awareness, desires, emotional attachments, beliefs, actions, etc. Our expanding waistlines—the worldwide “growth” of obesity—are perhaps the most visible evidence of the success of the common culture in capturing the “mind share” of large numbers of people. In sum, many facets of the ordinary culture 105 and its imposed advertising 105, messages 105, and media 105 attempt to dominate a large and growing part of each person's 104 103 attention, desires and activities.

In a brief summation of some examples, the Alternate Realities Machine (ARM) 101 enables departure from the current common reality 100 by providing multiple and flexible means for people and groups to filter, exclude and protect themselves from what is not wanted, while including what is wanted, and also protecting themselves both digitally and physically. Additionally, the ARM provides means (optional TP Paywalls) so that individuals and groups may choose to earn money by permitting entry by chosen advertisers and/or people who are willing to pay for attention and “mind share.” In a brief and familiar parallel, people typically use a television DVR (Digital Video Recorder) to skip advertisements and record/watch only the shows and news they want, along with some “live” television that they would like to see. Similarly, the ARM provides what in some examples could be called an “automated digital remote control” (its means are control over each SPLS's boundaries) so each separate SPLS reality excludes what we don't want and includes what we like, plus it may include optional paywalls and protections, so we no longer need to blindly accept everything the ordinary current reality attempts to impose on us. In fact, by using the ARM in some examples we can selectively filter the common mass culture to make it more like the individually supportive, positive, safe and successful culture that some might like it to be.

The ARM's means for this, at a high level and in some examples, includes each person 103 establishing one or a plurality of identities 106 (each of which may be a public identity, a private identity, or a secret identity). In turn, each identity 107 may have one or a plurality of Shared Planetary Life Spaces 111. In some examples, one identity 107 may have separate or combined SPLS's for various personal roles, activities, etc., with separate or combined SPLS's for personal interests such as a career 108 with professional associations, a particular job 108, a profession 108 with professional relationships, other multiple incomes 108, family 108, extended family 108, friends 108, hobbies 108, sports 108, recreation 108, travel 108, fun 108 (which may also be done by separate public, private, and/or secret identities), a second home 108, a private lifestyle 108, etc.

Each SPLS defines its “reality” by controlling boundaries 110 and in some examples ARM Boundaries Management 110 111 112 113 114 115 116 117 is employed, which has a plurality of example boundaries 110 to illustrate the use of boundaries to limit, prioritize and provide various functions and features for separate and different realities. In some examples these SPLS boundaries include priorities 110 to include and highlight what is wanted, filters 110 to exclude what is not wanted, (optional) paywalls 110 to require and receive payment for providing one's attention to certain elements of the common culture, and/or protections 110 which may be used to provide both digital and physical protection (as well as to protect various devices from theft).
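
The four example boundaries 110 can be pictured as an ordered evaluation applied to anything seeking an identity's attention. The sketch below is illustrative only, with hypothetical names and a deliberately simplified decision order (protections, then filters, then paywalls, then priorities); the ARM's actual boundary means are described elsewhere herein.

```python
# Illustrative sketch of SPLS boundary evaluation (hypothetical names).
from dataclasses import dataclass, field

@dataclass
class Boundaries:
    priorities: set = field(default_factory=set)    # what is wanted
    filters: set = field(default_factory=set)       # what is excluded
    paywalls: dict = field(default_factory=dict)    # sender -> price of attention
    protections: set = field(default_factory=set)   # digital/physical threats

def admit(sender: str, topic: str, b: Boundaries) -> str:
    """Decide how one incoming item crosses this SPLS's boundary."""
    if sender in b.protections:
        return "blocked-and-logged"                 # protection
    if topic in b.filters:
        return "excluded"                           # filter
    if sender in b.paywalls:
        return f"admitted-if-paid:{b.paywalls[sender]}"  # paywall
    if topic in b.priorities:
        return "admitted-and-highlighted"           # priority
    return "admitted"

b = Boundaries(priorities={"family"}, filters={"spam"},
               paywalls={"MegaAdCo": 0.05}, protections={"known-stalker"})
assert admit("MegaAdCo", "ads", b) == "admitted-if-paid:0.05"
assert admit("friend", "spam", b) == "excluded"
```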

In some examples these boundaries define a range of types of SPLS's, some of which are included in a high-level visualization 111 that starts at the broadest public reality 112 and moves to the most private, personal and non-public reality 117. Starting broadly, the current public reality remains 112 with no ARM 101, no identities 106 107, and no SPLS's 108 110. Within that, ARM Boundaries Management 110 provides multiple levels of controls and multiple types of SPLS's 113 114 115 116 117, which in some examples include: Public SPLS's 113, which are various manifestations of the ordinary public culture and provide only limited filters or protections, in some examples a state's citizens 113, in some examples a vendor's customers 113, in some examples a social network's members 113, etc. The next level is Groups' SPLS's 114, which in some examples may include the groups in which that person is a member 114, in some examples each of those groups' SPLS's, and the filters or paywalls they have applied to their SPLS's; in some examples a company where one works 114, in some examples a governance that an identity has joined 114, in some examples a church or temple where one is a member 114, etc.; these group SPLS's would include the boundaries each group decides it wants, which in some examples would be more restrictive and confidential for many corporations 114, more values-based or behavior-based for religious institutions 114, etc. The next levels are personal SPLS's 115 116 117, and these include in some examples one's public personal SPLS's 116, in some examples one's private and/or secret SPLS's 117 (if any), as well as any paywall(s) 115 that one might add; these would use whatever combination of filtering 110, priorities 110, paywall(s) 110, and protections 110 each identity would like, with some identities employing more intense, different, or varied boundaries than others.

In some examples broad learning of “what's best” 121 122 with rapid distribution 121 122 and adoption of that 123 may be employed to help people achieve increasing success 123 over time 124. This would shift control over today's current singular reality to individual choices of multiple new and evolving trajectories. The pace of this would be affected by these new realities' capabilities for delivering what people would like 121 122 123 124, by the excessive level and poor quality of messaging from the ordinary public culture 105 104, and by people's desires to create and live in their desired alternate realities 106 107 108 110—so this is likely to match what the people in each historical moment want and need 123, as well as to evolve over time 124 to reflect their expanding or diminishing desires. This “Expandaverse” growth in human realities is based on another component of the ARM (Alternate Realities Machine), which is (are) Directory(ies) 120 that include public, group, private and other Directories 120. These may be “mined” 121 and analyzed 121 for various metrics and data 120 that may include users 120, identities 120, profiles 120, results 120, status data 120, SPLS's 120, presence 120, places 120, tools 120, resources 120, face recognition data 120, other biometric data 120, authorizations or authentications data 120, etc. Since SPLS metrics may be tracked and reported 121 (such as what is most successful, effective, satisfying, etc.) in some examples it is possible to choose one's goals 122 and look up these analyses 121, or perform them as needed 121, to determine “what's best” and the characteristics, choices, settings, etc. used to achieve that. Because it is possible to save, access, copy, install, and try those choices, ARM identity settings 106 107, SPLS configurations 108 110 115 116 117, etc., in some examples this enables rapid learning, setup and use of the most effective or popular ways to apply identities for various types of goals, including their boundaries settings such as priorities 110, filters 110, paywalls 110, protections 110, etc.
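
The “look up what's best, then copy it” cycle 121 122 just described can be pictured as a ranked query over a Directory of shared configurations. The sketch below is illustrative only; the schema, the success_score field, and the helper name are hypothetical stand-ins for the Directory metrics 120.

```python
# Illustrative sketch: rank shared SPLS configurations for a chosen goal
# and copy the best one's boundary settings so they can be tried directly.
def best_settings(directory: list, goal: str) -> dict:
    candidates = [e for e in directory if goal in e.get("goals", [])]
    best = max(candidates, key=lambda e: e.get("success_score", 0.0))
    return dict(best["boundaries"])          # a copy, ready to install and tune

directory = [
    {"goals": ["career"], "success_score": 0.91,
     "boundaries": {"priorities": ["mentors"], "filters": ["spam"]}},
    {"goals": ["career"], "success_score": 0.74,
     "boundaries": {"priorities": ["job-boards"], "filters": []}},
]
print(best_settings(directory, "career"))    # -> the 0.91 configuration's settings
```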

An important distinction is the potential scale and volume of manageable alternate realities that may be enabled by the ARM 101. In some examples this may be far more than a simple division of the one current reality into a few variations—because each person 103 104 may have one or a plurality of identities 106 107 (which may be changed over time); and because each identity may have one or a plurality of SPLS's 108 110 111 112 113 114 115 116 117 (which may be changed over time); and because each identity may be public, private or secret. It is entirely conceivable that an identity may be created to control one SPLS's boundaries so that this “reality” includes only one other person, a place or two, a couple of communications tools and financial resources, and everything else excluded—a digital world created for one's true love so two people could find happiness and, while together, make their way in the larger world as a unique and special couple. With the ability to find 121 122, copy 122 and re-use 122 settings, any types of identities, lifestyles or personal goals that can be expressed 106 107 108 110 111 113 114 115 116 117 may become popular and copied widely 122, enabling both personal 115 116 117 and cultural 112 113 114 growth in multiple trajectories 124 that are unimaginable today.

Before describing the ARTPM's Teleportal Devices, FIG. 7 illustrates the current reality's numerous different digital devices that have separate operating systems, interfaces and networks; different means of use for communications and other tasks; different content types that sometimes overlap with each other (with different interfaces and means for accessing the same content); etc.

Essential underlying issues among the current reality's digital devices have parallels to the history of the book. Between about 1435 and 1444 Johann Gutenberg devoted himself to a range of inventions that related to the process of printing with movable type, and he opened the first printing establishment in 1455. In 1457 the first printed book with a printer's imprint was published (the famous Mainz Psalter). Printing spread as apprentices and others learned the trade, then moved to new cities and opened their own printing shops. By 1489 there were 110 printing shops across Europe and by 1500 more than 200. At that time only about 200,000 Europeans could read, so books were not the main part of a printer's business, which included posters, broadsheets, pamphlets, and other works shorter than full books.

Early books were not standardized and took many different layouts and forms, many of them expensive to produce and buy. Most early books simply attempted to imitate the appearance of hand lettered manuscripts, and many printers would cut a new typeface to imitate a manuscript when it was copied, even if the letter forms were fairly illegible. Basic elements of “the book” had to be developed and then adopted as standards. An example is a title page that listed a definite title for the book, the author's name, and the printer's name and address. Even simple devices like page numbers, reasonable margins, and a contents page that refers to page numbers rather than sections of the text were both innovations and gradually emerging standards. The content of that century's books was often based on verbal discourse and storytelling—the culture of most people (even those who could read) was oral or semi-oral—so at the level of the text printers were required to regularize spelling, standardize punctuation, separate long blocks of text into paragraphs, etc. Gradually innovations were also made in making text more accessible and readable, such as by breaking up the text into units so it was easier to read and return to a section or passage. Together, these innovations and emerging standards made books easier and faster to read, which expanded the ways that books could be used, as well as helping spread literacy to more people.

It took about 80 years—until about 1530—before these innovations became widely enough adopted that it could be said that the “book” was developed and standardized. Today, a “traditional” book has many of the elements that took most of the book's first century to develop. This initial century yielded the following “typical book”: A book begins with a jacket with endpapers glued to it and the body of the bound book glued to the endpapers (though with a paperback the jacket and endpapers are the same wrap-around cover, with the bound book glued to it). The bound content normally follows a predictable sequence, with the right (or recto) side considered dominant and the left (or verso) side subordinate. The front matter (traditionally called “preliminaries”) includes one or more blank pages, a series or “bastard” title on a new right page, a frontispiece on the left, the title page on the right, the copyright page on the left behind the title page, a dedication on the right, a Foreword that begins on the right, a Preface that begins on the right, Acknowledgments that begin on the right, Contents that begin on the right, an Illustrations List that begins on the right or the left, and an Introduction that begins on the right. The body of a traditional book's text is equally structured and begins with a part title on the right (if the book is divided into major parts or sections); the opening of each chapter begins in the middle of a right page with the chapter title or chapter number above it (chapter numbers were traditionally Roman numerals if there were a small number of chapters, or Arabic numerals if a larger number); and if illustrated a book may include a separate section for illustrations or plates (which began on a right page). The traditional book's “back matter” includes an Appendix that begins on the right, Notes that begin on the right, a Bibliography that begins on the right, Illustration Credits that begin on the right, a Glossary that begins on the right, an Index that begins on the right, a Colophon that begins on the right or the left, and one or more blank pages.

It was worth spending most of a century developing this “standardized” or “typical” book. This traditional book form communicates more than importance and distinction. It is visible proof that every word of a book is written, edited, designed and printed with care, credibility, authority and taste. For all who are literate the book's layout and design are predictable, easy-to-use, easy to store and care for, and easy to return to any needed parts or passages whenever wanted. These innovations and advances are part of why books are widely credited with playing key roles in the development of the Renaissance, Science, the Reformation, Navigation, Europe's exploration of the world, and much more. During the 1500's more than 200,000 book titles were recorded, and with an estimated 1,000 copies per title, that is more than 200 million books printed. During the first half of the 1600's that number is estimated to have tripled—so the spread of this new standard book “device” was increasingly part of Europe's wider economic, scientific and cultural progress.

Today, the emergence of our digital environment, with numerous overlapping devices, has parallels to the first century of the book. As depicted in FIG. 7, today's digital era is young and our many digital devices 125 are non-standard, not predictable to use, and do not have a common interface structure that can be employed easily across their range of features, or returned to easily after a period of non-use to pick up where one left off. Yet today's digital devices 126 127 128 129 130 increasingly provide access to similar or overlapping digital media and content, and they also do many of the same things with digital content and interactions—they find, open, display, use, edit, save, look up, contact, attach, transmit, distribute, etc. FIG. 7 lists some examples of these “current devices” 125 which include: Mobile phones 126, landline telephones 126, VOIP phone lines 126, wearable computing devices 126, cameras built into mobile devices 126 127, PCs 127, laptops 127, stationary internet appliances 127, netbooks 127, tablets 127, e-pads 127, mobile internet appliances 127, online game systems 127, internet-enabled televisions 128, television set-top boxes 128, DVR's (digital video recorders) 128, digital cameras 129, surveillance cameras 129, sensors 129 (of many types; in some examples biometric sensors, in some examples personal health monitors, in some examples presence detectors, etc.), web applications 130, websites 130, web services 130, web content 130, etc.

Therefore, in the “history” of the Alternate Reality there was a recognition of today's parallels to the first century of the book. The parallel functionality and content of the many siloed digital devices 125 were factored together, and the Alternate Reality evolved a digital devices environment (the ARTPM) that is summarized in FIG. 8. To facilitate this transition the Alternate Reality included the (optional) capability to use a plurality of current devices 125 as Subsidiary Devices to the TPM 140 in FIG. 8, essentially turning them into commodity input/output devices within the TPM's digital environment—but with a common and predictable TP interface that could be used widely and consistently to establish access and remote control, essentially raising the productivity of using a plurality of existing digital devices.

After years of building and using the Internet and other networks (such as private, corporate, government, mobile phone, cable TV, satellite, service-provider, etc.), the capabilities for presence to solve both individual and/or collective problems are still in their infancy. This TPM transforms the local glass window to provide means for a substantial leap to Shared Planetary Life Spaces that could be provided over various networks. FIG. 8 provides a high-level illustration of the Teleportal Machine's (TPM's) devices and networks described in FIG. 3, namely Teleportal Devices 52 57, Teleportal Utility 64 and Teleportal Network 64. Turning to FIG. 8 this Teleportal Machine provides a combination of improvements that include multiple components and devices. Taken together, these provide families of devices 132 133 134 135, networks 131, servers 131, systems 131 139, infrastructure utility services 131 139, connections to alternative input/output devices 134, devices that include a plurality of types of products and services 135, and utility infrastructure 139—together comprising a Teleportal Machine (TPM) for looking and listening at a new scale and speed that are explicitly designed to provide the potential to transform human presence, communications, productivity, understanding and a plurality of means for delivering human success.

Local Teleportal (LTP) 132: In some examples (“Local Teleportal” or LTP) this provides the means to transform the local glass window so that instead of merely looking through a wall at the place immediately outside, this “window” 132 becomes able to “be present” in Shared Planetary Life Spaces (which include people, places, tools, resources, etc.) around the planet. Optionally, this “window's” remote presence may behave as if it were a local window because (1) the viewpoint displayed changes automatically to reflect the viewer's position relative to the remote scene (without needing to send commands to the Remote Teleportal's camera(s)) by means of a Superior Viewer Sensor (SVS) and related processing in a Local Processing Module, and (2) audio sounds from the remote location may be heard “through” this “window” as if the viewer was present at the remote location and was viewing it through a local window. In addition, alternate video and audio input and output devices may optionally be used with or separately from a Local Teleportal. In some examples this includes a video camera/microphone 132, along with processing in the LTP's Processing Module 132 and transmission via the LTP's Communications Module 132 to use Teleportal Shared Space(s), and/or to provide personal narration or other local video to make Teleportal broadcasts or augment Teleportal applications. Optionally, alternative access to LTP video and audio, or direct Remote Control or a Virtual Teleportal, may be provided by other means, in some examples a mobile phone with a graphical screen 134, a television connected to a cable or satellite network 134, a laptop or PC connected to the Internet or other network 134, and/or other means as described herein.
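
The SVS-driven “window” behavior may be pictured as a viewer-dependent crop of a wider remote scene: as the viewer moves one way, the displayed region shifts the other way, as it would through a real window. The sketch below is illustrative only; the linear mapping, the clamping, and the names are assumptions rather than the SVS's actual processing.

```python
# Illustrative sketch: choose which part of a wide remote scene to display,
# based on the viewer's lateral position and distance reported by an SVS.
def window_crop(viewer_x_m: float, viewer_dist_m: float,
                scene_w_px: int, view_w_px: int) -> int:
    """Return the left edge (px) of the crop to display for this viewer."""
    max_shift = (scene_w_px - view_w_px) // 2
    # Moving right reveals more of the scene's left side; nearer viewers
    # get stronger parallax. Clamp so the crop stays inside the scene.
    shift = int(-viewer_x_m / max(viewer_dist_m, 0.1) * max_shift)
    shift = max(-max_shift, min(max_shift, shift))
    return (scene_w_px - view_w_px) // 2 + shift

# A viewer 0.3 m right of center, 1 m from the LTP: the crop shifts left.
print(window_crop(0.3, 1.0, scene_w_px=3840, view_w_px=1920))  # -> 672
```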

Mobile Teleportal (MTP) 132: In some examples (“Mobile Teleportal” or MTP) this provides the means to transform a local digital tablet or pad so that instead of merely looking at a display screen this “device” 132 becomes able to “be present” in Shared Planetary Life Spaces (which include people, places, tools, resources, etc.) around the planet. Optionally, this “device's” remote presence may behave as if it were a local window because (1) the viewpoint displayed may be set to change automatically to reflect the viewer's position relative to the remote scene (without needing to send commands to the Remote Teleportal's camera(s) by means of a Superior Viewer Sensor (SVS) and related processing in the MTP's Processing Module), and (2) audio sounds from the remote location may be heard “through” this device as if the viewer was present at the remote location and was viewing it through a local window. In addition, alternate video and audio input and output devices may optionally be used with or separately from a Mobile Teleportal. In some examples this includes a video camera/microphone 132, along with processing in the MTP's Processing Module 132 and transmission via the MTP's Communications Module 132 to use Teleportal Shared Space(s), and/or to provide personal narration or other local video to make Teleportal broadcasts or augment Teleportal applications. Optionally, alternative access to MTP video and audio, or direct Remote Control or a Virtual Teleportal, may be provided by other means in some examples a mobile phone with a graphical screen 134, a television connected to a cable or satellite network 134, a laptop or PC connected to the Internet or other network 134, and/or other means as described herein.

Remote Teleportal (RTP) 133: A “Remote Teleportal” (or RTP) provides one means for inputting a plurality of video and audio sources 133 to Shared Planetary Life Spaces by means of RTPs that are fixed or mobile; stationary or portable; wired or wireless; programmed or remotely controlled; and powered by the electric grid, batteries or other power sources. In addition, optional processing and storage by an RTP Processing Module 133 may be used with or separately from a Remote Teleportal (in some examples for running video applications, for storing video and audio, for dynamic video alterations of the content of a real-time or near-real-time video stream, etc.), along with transmission of real-time and/or stored video and audio by an RTP's Communications Module 133. Optionally, alternative remote input to or output from this Teleportal Utility 131 139 may be provided by other means, in some examples an AID/AOD 134 (in some examples an Alternative Input/Output Device such as a mobile phone with a video camera 134) or other means.

Alternate Input Devices (AIDs) 134/Alternate Output Devices (AODs) 134: In some examples these include devices that may be utilized to provide inputs and/or outputs to/from the TPM, such as mobile phones, computing devices, communications devices, tablets, pads, communications-enabled televisions, TV set-top boxes, communications-enabled DVRs, electronic games, etc., including both stationary and portable devices. While these are not Teleportals they may run a Virtual Teleportal (VTP) or a web browser that emulates an LTP and/or an MTP. Depending on the device's capabilities and connectivity, they may also be able to use the VTP or browser emulation to operate the device as if it were an LTP, an MTP or an RTP—including some or many of a TP Device's functions and features.

Devices 135: In some examples the TPM includes an Active Knowledge Machine (AKM) which transforms a plurality of types of products, equipment, services, applications, information, entertainment, etc. into “AKM Devices” (hereinafter “Devices”) that may be served by one or more AKMs (Active Knowledge Machines). In some examples Devices and/or users make an AK request from the AKM by means of trigger events in the use of devices, or by a user making a request. The request is received, parsed, the appropriate Active Knowledge Instructions (AKI) and/or Active Knowledge and/or marketing or advertising is determined, then retrieved from Active Knowledge Resources (AKR). The AKM determines the receiving device, formats the AKI and AK content for that device, then sends it to said receiving device. The AKM determines the result by receiving an (optional) response; if not successful the AKM may repeat the process or the result received may indicate success; in either case, it logs the event in AK results (raw data). Through optimizations the AKM may utilize said AK results to improve the AKR, AKI and AK content, AK message format, etc. The AKI and AK delivered may include additional content such as advertisements, links to additional AK (such as “best choice” for that type of device, reports or dashboards on a user's or group's performance), etc. Reporting is by means of standard or custom dashboards, standard or custom reports, etc., and said reporting may be provided to individual users, sponsors (such as advertisers), device vendors, AKM systems that employ AK results data, other external applications that employ AK results data, etc.
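
The AKM cycle just recited (trigger, parse, retrieve, format, send, log) can be pictured as a small pipeline. The sketch below is illustrative only: the AKR lookup table, the helper names, and the 160-character formatting rule are hypothetical stand-ins for the AKM's actual means.

```python
# Illustrative sketch of one AKM request cycle (hypothetical names).
AKR = {("camera", "blurry-photos"): "Hold the shutter halfway to focus first."}
ak_results = []                              # raw data for later AKM optimizations

def send(device: str, message: str) -> bool:
    """Deliver the formatted AK to the receiving device (stubbed here)."""
    print(f"[to {device}] {message}")
    return True                              # a real response would set this

def handle_trigger(device_type: str, trigger: str, receiving_device: str) -> None:
    aki = AKR.get((device_type, trigger), "No Active Knowledge found.")
    # Format the AKI for the receiving device, then deliver and log the result.
    message = aki[:160] if receiving_device == "mobile-phone" else aki
    success = send(receiving_device, message)
    ak_results.append({"device": device_type, "trigger": trigger,
                       "success": success})

handle_trigger("camera", "blurry-photos", "mobile-phone")
```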

Teleportal Network (TPN) 131: In some examples a “Teleportal Network” (or TPN) provides communications means to connect Teleportal Devices, in some examples LTPs 132, MTPs 132, RTPs 133, and AIDs/AODs 134, by means of various devices and systems that are described in a separate patent application. The transport network may include in some examples the public Internet 131, a private corporate WAN 131, a private network or service for subscribers only 131, or other types of communications. In addition, optional network devices and utility systems 131 may be used with or separately from a Teleportal Network, in some examples to provide secure communications by means such as authentication, authorization and encryption; dynamic video editing such as for altering the content of real-time or stored video streams; or commercial services by means such as subscription, membership, billing, payment, search, advertising, etc.

Teleportal Utility (TPU) 131 139: In some examples a “Teleportal Utility” (or “TPU”) combines both new and existing devices and systems that, taken together, provide a new type of utility that integrates devices, systems, methods, processes, etc. to look, listen and communicate bi-directionally in real-time Shared Planetary Life Spaces that include live and recorded video and audio, and in some examples places, tools, resources, etc. This TPU 131 139 is related to the integration of multiple devices, networks, systems, sensors and services that are described in some other examples herein together with this TPU. This TPU provides means for (1) in some examples viewing of, and/or listening to, one or a plurality of remote locations in real-time and/or recordings from them, (2) in some examples remote viewing and streaming (and/or recording) of video and audio from one or a plurality of remote locations, (3) in some examples network servers and services that enable a local viewer(s) to watch one or a plurality of remote locations both in real-time and recorded, (4) in some examples configurations that enable visible two-way Shared Space(s) between two or multiple Local Teleportals, (5) in some examples construction of non-edited or edited video and audio streams from multiple sources for broadcast or re-broadcast, (6) in some examples providing interactive remote use of applications, tools and/or resources running locally and/or running remotely and provided locally for interactive use(s), (7) in some examples (optional) sensors that determine viewer(s) positions and movement relative to the scene displayed, and respond by shifting the local display of a remote scene appropriately, and (8) other features and capabilities as described herein. The transport network may include in some examples the public Internet 131, a private corporate WAN 131, a private network or service for subscribers only 131, or other types of communications or networks. In addition, optional network devices 131 and utility systems 139 may be used with or separately from a Teleportal Network 131, in some examples to provide secure communications by means such as authentication, authorization and encryption; dynamic video editing such as altering the content of real-time or stored video streams; commercial services by means such as subscription, membership, billing, payment, search, advertising; etc.

Additions to existing Devices, Services, Systems, Networks, etc.: In addition, vendors of mobile phones 141, landline telephones 141, VOIP phone lines 141, wearable computing devices 141, cameras built into mobile devices 141 142, PCs 142, laptops 142, stationary internet appliances 142, netbooks 142, tablets 142, pads 142, mobile internet appliances 142, online game systems 142, internet-enabled televisions 143, television set-top boxes 143, DVR's (digital video recorders) 143, digital cameras 144, surveillance cameras 144, sensors 144 (of many types; in some examples biometric sensors, in some examples personal health monitors, in some examples presence detectors, etc.), web applications 145, websites 145, web services 145, etc. may utilize Teleportal technology to add Teleportal features and capabilities to their mobile phones 141, landline telephones 141, VOIP phone lines 141, wearable computing devices 141, cameras built into mobile devices 141 142, PCs 142, laptops 142, netbooks 142, tablets 142, pads 142, online game systems 142, television set-top boxes 143, DVR's (digital video recorders) 143, cameras 144, surveillance cameras 144, sensors 144, web applications 145, websites 145—whether as part of their basic subscription plan(s), or for an additional charge by adding it as another premium, separately priced upgrade, feature or service.

Subsidiary Devices 140: By means of Virtual Teleportals (VTP) 60 in FIG. 3 and Remote Control Teleportaling (RCTP) 60, some examples of various current devices depicted in FIG. 7 may be utilized as (commodity) Subsidiary Devices 140 in FIG. 8. In some examples this integration constitutes innovations in their functionality, ease of use, integration of multiple separate devices into one ARTPM system, etc. In some examples this provides only a limited subset of the functionality and services that Teleportaling provides. In some examples:

Use Remote Control Teleportaling (RCTP) to run PCs 142, laptops 142, netbooks 142, tablets 142, pads 142, game systems 142, etc.: In some examples a plurality of PCs may be used by Remote Control from LTPs, MTPs and RTPs, or from AIDs/AODs that are running an RCTP (Remote Control Teleportal). This turns those PCs into commodity-level resources that may be accessed from the various TP Devices. In some examples PCs can be provided throughout a Shared Planetary Life Space to all of its participants from any of its participants who choose to put any of their appropriately configured PCs online for anyone in the SPLS to use. In some examples PCs can be provided openly online for charities and nonprofit organizations to use, so they have the computing they need without needing to buy as many PCs. In some examples PCs can be provided for a specific SPLS group(s) such as students in developing countries, schools in developing countries, etc. In some examples PCs can be provided for specific services such as to add face recognition to a camera that doesn't have sufficient computing or storage, to add “my property” authentication and theft alerts to devices that don't have sufficient computing or storage, etc. In some examples PCs can be rented to provide computers and/or computing for specific purposes. In some examples PCs can be used for specific purposes such as face recognition to spot and track celebrities in public, then send alerts on their locations and activities, so those who follow each celebrity can observe them as they move from location to location. In some examples other devices (such as laptops 142, netbooks 142, tablets 142, pads 142, games 142, etc.) may be capable of being controlled remotely, in which case they may be turned into commodity Subsidiary Devices that are run in various combinations from TP Devices and the TPM. Whether these devices can be controlled remotely depends on the functions and capabilities of each device; and even when this is possible, only a subset of RCTP capabilities and/or features may be available.
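
One way to picture the commodity PC pool described above is a simple registry in which SPLS participants put appropriately configured PCs online and others lease one for a specific use. The registry/lease model below is an assumption made purely for illustration; the specification does not prescribe any concrete mechanism or naming.

```python
# A minimal sketch of pooling appropriately configured PCs as commodity
# Subsidiary Devices within an SPLS. Hypothetical names throughout.

class SubsidiaryPool:
    def __init__(self):
        self._available = {}   # device_id -> set of offered capabilities

    def put_online(self, device_id, capabilities):
        """An SPLS participant offers one of their PCs for shared use."""
        self._available[device_id] = capabilities

    def lease(self, needed_capability):
        """Find a PC that supports the requested use, e.g. adding face
        recognition to a camera that lacks sufficient computing power."""
        for device_id, caps in self._available.items():
            if needed_capability in caps:
                del self._available[device_id]   # leased; no longer available
                return device_id
        return None

pool = SubsidiaryPool()
pool.put_online("den-pc", {"face-recognition", "storage"})
print(pool.lease("face-recognition"))   # -> "den-pc"
```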

Use a Virtual Teleportal (VTP) to run Teleportals on PCs 142, laptops 142, netbooks 142, tablets 142, pads 142, games 142, etc.: In some examples functionality may be added to various digital devices by running a Virtual Teleportal, which provides them the functionality of a Teleportal without needing to buy a TP Device 132 133. This turns them into an AID/AOD 134. Whether a VTP can run on each of these devices depends on the functions and capabilities of each device; even when it can run, only a subset of VTP capabilities and/or features may be available.
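
Because a VTP may offer only a subset of its capabilities on less capable devices, some form of capability check is implied. Below is a minimal sketch of such a check; the feature names and hardware requirements are purely illustrative assumptions, not specified VTP features.

```python
# Sketch: deciding which subset of VTP features a given device can expose.
# Feature names and their hardware requirements are assumptions.

VTP_FEATURE_REQUIREMENTS = {
    "shared-space-video": {"camera", "display", "network"},
    "shared-space-audio": {"microphone", "speaker", "network"},
    "superior-viewer":    {"display", "network", "viewer-sensor"},
}

def available_vtp_features(device_capabilities):
    """Return the TP features this device can run; the rest stay unavailable."""
    return [feature for feature, required in VTP_FEATURE_REQUIREMENTS.items()
            if required <= device_capabilities]   # subset test

# A netbook without a viewer sensor gets only a subset of VTP features.
print(available_vtp_features({"camera", "display", "network",
                              "microphone", "speaker"}))
```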

Use an LTP 132, MTP 132, or AID/AOD 134 to replace mobile phone and/or landline phone calling services: In some examples a plurality of phone lines and/or phone services might be replaced by Teleportal Shared Space(s), whether from a fixed location by means of a Local Teleportal 132 or from mobile locations by means of a Mobile Teleportal 132, and/or from fixed or mobile locations by means of an AID/AOD 134. In some examples only basic phone calling services and phone lines may be replaced by TP Devices 132 134. In some examples more phone services and phone lines may be replaced 132 134, such as voice mail, text messaging, photographs, video recording, photo and video distribution, etc.

Use Remote Control Teleportaling (RCTP) to run mobile phones 141, wearable computers 141, cameras built into mobile devices 141 142, etc.: In some examples a plurality of mobile devices may be used by Remote Control from LTPs, MTPs and RTPs, or from AIDs/AODs that are running an RCTP (Remote Control Teleportal). This turns those mobile devices into commodity-level resources that may be accessed from the various TP Devices. Whether a mobile device can be controlled remotely depends on the functions and capabilities of each device; and even when this is possible, only a subset of RCTP capabilities and/or features may be available.

Use a Virtual Teleportal (VTP) to run Teleportals (where technically possible) on mobile phones 141, landline telephones 141, VOIP phone lines 141, wearable computing devices 141, cameras built into mobile devices 141 142, PCs 142, laptops 142, netbooks 142, tablets 142, pads 142, online game systems 142, television set-top boxes 143, DVR's (digital video recorders) 143, cameras 144, surveillance cameras 144, sensors 144, web applications 145, websites 145, etc.: In some examples functionality may be added to various digital devices by running a Virtual Teleportal, which provides the technically possible subset of the functionality of a Teleportal without needing to buy a TP Device 132 133. This turns them into an AID/AOD 134. Whether a VTP can run on each of these devices depends on the functions and capabilities of each device; even when it can run, only a subset of VTP capabilities may be supported, so only some TP features may be available.

Telephone: Mobile/Landline/VOIP (Voice over IP over the Internet): This includes the mobile phone vendors and landline RBOCs (Regional Bell Operating Companies) such as BellSouth, Qwest, AT&T and Verizon. It also includes VOIP vendors such as Vonage and Comcast (whose Digital Voice product has made it the fourth largest residential phone service provider in the United States). In some examples TP Devices may replace landlines, mobile phone lines, or VOIP lines for telephone calling services. In some examples any type of compatible device or service can be attached to the phone network, and this may include TP Devices 132 133 134 135 140. In some examples various phone services may be provided or substituted by TP Devices 132 133 134 such as texting, telephone directories, voice mail/messaging, etc. (though with TP-specific differences). Even location-based services such as navigation and local search may be replaced on Teleportals (again with TP-specific differences).

Cable television/Satellite television/Broadcast television/IPTV (Internet-based TV over IP)/Videos/Movies/Multimedia shows: Teleportal Devices 132 133 134 135 140 might provide access to television from a variety of sources. In some examples TP Devices 132 133 134 140 may substitute for cable television, satellite television, broadcast television, and/or IPTV. In some examples TP Devices 132 133 134 140 may run local TV set-top boxes and display their television signals locally, or transmit their television signals and display them in one or a plurality of remote locations. In some examples TP Devices 132 133 134 140 may run remote TV set-top boxes and display their television signals locally, or rebroadcast those remotely received television signals and display them in one or a plurality of remote locations. In some examples Teleportals 132 134 140 may be used to be present at events located in any location where TP Presence may be established, and those events may be recorded and re-broadcast either live or by broadcasting said recording at a later date(s) and/or time(s). In some examples Teleportals 132 134 140 may be used to view television shows, videos and/or other multimedia that is available on demand and/or broadcast over a network. In some examples Teleportals 132 133 134 140 may be used to acquire and copy television shows, videos and/or other multimedia for rebroadcast over a private Teleportal Broadcast Network.

Substitute for Subsidiary Devices via Remote Control Teleportaling (RCTP): By means of RCTP it may be possible to substitute TP Devices 132 133 134 140 (including Subsidiary Devices) for a range of other electronic devices so that not everyone needs to own and run as many of these as today. Some of the electronic devices that may be substituted for by means of TP Devices include mobile phones 141, landline telephones 141, VOIP phone lines 141, wearable computing devices 141, cameras built into mobile devices 141 142, PCs 142, laptops 142, netbooks 142, tablets 142, pads 142, online game systems 142, television set-top boxes 143, DVR's (digital video recorders) 143, cameras 144, surveillance cameras 144, sensors 144, web applications 145, websites 145, etc. Whether RCTP can run on each of these devices depends on the functions and capabilities of each device; even when it can run, only a subset of RCTP capabilities may be supported, so only some TP features may be available.

Services, applications and systems: Some widely used online services might be provided by Teleportal Devices 132 133 134 140. In some examples these include PC-based and mobile phone-based services such as Web browsing, Web-based email, social networks, online games, accessing live events, news (which may include news of various types and formats such as general, business, sports, technology, etc., in formats such as text, video, interviews, “tweets,” live observation, recorded observations, etc.), online education, reading, visiting entertainments, alerts, location-based services, location-aware services, etc. These and other services, applications and systems may be accessed on Teleportal Devices 132 133 134 140 by means such as an application(s), or a Web browser that runs on physical Teleportals, runs on other devices by means of a VTP (Virtual Teleportal), or runs on other devices by means of RCTP (Remote Control Teleportaling), etc. Whether a VTP or an RCTP can run on each of these devices and provide each type of substitution depends on the functions and capabilities of each device; even when it can run, only a subset of capabilities may be supported, so only some TP features may be available.

New innovations that may be accessed as Subsidiary Devices: Entirely new classes of electronic devices 140, services 140, systems 140, machines 140, etc. might be accessed by means of Teleportal Devices 132 133 134 135 140 if said electronics can run a VTP (Virtual Teleportal) or be controlled by means of an RCTP (Remote Control Teleportaling). Whether a VTP and/or RCTP can run on each of these devices depends on the functions and capabilities of each device; even when it can run, only a subset of VTP and/or RCTP capabilities may be supported, so only some TP features may be available.

Unlike the huge variety of complicated user interfaces on many types of devices 125 126 127 128 129 130 in FIG. 7, which make it difficult for users to fully employ some types, models or new versions of devices, applications and systems, and too often prevent them from using a plurality of advanced features of said diverse devices, applications and systems, said Teleportal Machine, summarized in FIG. 8, provides an Adaptable Common User Interface 51 in FIG. 3 across its set of TP Devices (LTP 132, MTP 132, RTP 133, AID/AOD 134, and AKM Devices 135) and TP Utility 139 functions that include Teleportal Shared Space(s) 55 56 in FIG. 3, Virtual Teleportals 60 61, Remote Control Teleportals 60 61, Teleportal Broadcast Networks 53 54, Teleportal Applications Networks 53 54, Other Teleportal Networks 58 59, and Entertainment and RealWorld Entertainment 62 63. Because said Teleportal's “fourth screens” can add a usable interface 212 across a wide range of interactions 52 53 55 57 58 60 62 that today require customers to figure out difficulties in interfaces on the many types and models of products, services, applications, etc. that run on today's “three screens” of PCs, mobile phones and navigable TVs on cable and satellite networks 125 126 127 128 129 130 in FIG. 7, said Teleportal Utility's Common User Interface 51 could make it easier for customers to use one shared Teleportal interface to succeed in doing a plurality of tasks, and to accomplish a plurality of goals that might not be possible when required to figure out a myriad of different interfaces on the comparable blizzard of technology-based products, services, applications and systems.

FIG. 9, “Stack View of Connections and Interface,” illustrates the manageability and consistency of the TP Devices environment illustrated and discussed in FIG. 8. A pictorial illustration of this FIG. 9 view will be discussed in FIG. 10, “Summary of TPM Connections and Interactions.” The Teleportal Utility's (TPU's) Adaptable Consistent Interface and user experience is illustrated and discussed in FIGS. 183 through 187 and elsewhere. To begin, the stack view in FIG. 9 summarizes the types of connections and interfaces in the TPM Devices Environment 136 137 138 139 in FIG. 8. From this view there are five main types of connections 180 and just one TPU Interface 183 across these five types of connections. With FIG. 9's focused view of five connection types and one TPU Interface it can be seen that all parts of the ARTPM, including Subsidiary Devices, can be run in a manageable way by almost any user throughout the ARTPM digital environment. This architecture of five main types of connections 180 and one TPU Interface 183 is consciously designed as a radical Alternate Reality simplification of our current reality, where a blizzard of devices and interfaces is comparatively complex and difficult to use—in fact, our current reality requires an entire set of professions and functions (variously known as usability, ergonomics, formative evaluation, interface design, parts of documentation, parts of customer support, etc.) to deal with the resulting complexities and user difficulties.

This Alternate Reality TPM stack view includes: (1) Direct Teleportal Use 180 employs the consistent TPU Interface 183 across LTPs (Local Teleportals) 132 180 184, MTPs (Mobile Teleportals) 132 180 184, and RTPs (Remote Teleportals) 133 180 184; (2) Virtual Teleportal (VTP) use 180 184 employs an adaptable subset of the consistent TPU Interface 183 and is used on AIDs/AODs (Alternate Input Devices/Alternate Output Devices) 134 180 184 as described elsewhere (it is worth noting that whether a VTP can run on each of these AID/AOD devices depends on the functions and capabilities of each AID/AOD device; and when it runs only an adapted subset of VTP capabilities, only some TP features may be available—and those features would employ a subset of the Consistent TPU Interface 183); (3) Remote Control Teleportaling (RCTP) use 180 employs an adaptable subset of the consistent TPU Interface 183 and is used on Subsidiary Devices 140 180 184 as described elsewhere (it is worth noting that whether an RCTP can run on each of these Subsidiary Devices depends on the functions and capabilities of each Subsidiary Device; and when it runs only an adapted subset of RCTP capabilities, only some TP features may be available—and those features would employ a subset of the Consistent TPU Interface 183); (4) Devices In Use (DIU) 180 employs an AKM (Active Knowledge Machine) subset of the consistent TPU Interface 183 and is used on DIU's 135 180 184 or on Intermediary Devices 135 180 184 as described elsewhere (such as in the AKM starting in FIG. 193 and elsewhere; it is worth noting that the AKM subset of the adaptable TPU Interface 183 varies considerably by the functions and capabilities of each Device In Use and/or its Intermediary Device; and when it runs only an adapted subset of AKM capabilities, only some TP features may be available—and those features would employ a subset of the Consistent TPU Interface 183); (5) Administration 180 of one's User Profile 181, account(s), subscription(s), membership(s), settings, etc. (such as of the TPU 131 136 139 180; TPN 131 136 139 180; etc.) employs the consistent TPU Interface 183 when said Administration 180 is done by means of a TP Device such as LTPs (Local Teleportals) 132 180 184, MTPs (Mobile Teleportals) 132 180 184, and RTPs (Remote Teleportals) 133 180 184; it employs an adaptable subset of the consistent TPU Interface 183 when Administration 180 is done by means of a VTP on an AID/AOD (Alternate Input Device/Alternate Output Device) 134 180 184.
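
As a rough illustration of “five main types of connections, one TPU Interface,” the sketch below maps each connection type onto the full interface or an adapted subset, and then narrows that subset further by a device's own capabilities. The specific feature names and subset assignments are assumptions made only for illustration; the specification leaves the actual adaptation to the Adaptable Consistent Interface described in FIGS. 183 through 187.

```python
# Sketch of the five connection types sharing one TPU Interface. The
# feature names and subset assignments are illustrative assumptions.

FULL_TPU_INTERFACE = {"presence", "focused-connection", "broadcast",
                      "remote-control", "active-knowledge", "administration"}

CONNECTION_SUBSETS = {
    "direct-tp": FULL_TPU_INTERFACE,          # LTPs, MTPs, RTPs
    "vtp":       FULL_TPU_INTERFACE,          # AIDs/AODs; adapted per device
    "rctp":      {"remote-control"},          # Subsidiary Devices
    "diu":       {"active-knowledge"},        # AKM Devices In Use
    "admin":     {"administration"},          # profile/account management
}

def interface_for(connection_type, device_capabilities=None):
    """One interface to learn; each device sees an adapted subset of it."""
    subset = CONNECTION_SUBSETS[connection_type]
    if device_capabilities is not None:   # device limits narrow the subset
        subset = subset & device_capabilities
    return sorted(subset)

print(interface_for("direct-tp"))
print(interface_for("vtp", {"presence", "broadcast"}))   # a limited AID/AOD
```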

The TPU's Adaptable Consistent Interface 183 is an intriguing possibility. Improved designs have replaced the leaders of entire industries, such as when Microsoft locked down market control of the PC operating system and Office software industries by introducing Windows and Microsoft Office. For another example, Apple became a leader of the music, smart phone and related electronic tablet industries with its iPod/iPhone/iPad/iTunes product lines. These types of transformations are rare but possible, especially when a major company drives them. In a possible parallel business evolution, the advent of the Teleportal Utility's (TPU's) Adaptable Consistent Interface 183 9218 in FIG. 183 “User Experience” might provide one or more major companies with the business opportunity to attempt replacing current industry leaders in multiple business categories. They would offer users a new choice: today's blizzard of different and (in combination) hard-to-learn and confusing interfaces, or one TPU Adaptable Consistent Interface 183 9218 across a digital environment. Another competitive advantage is the current anti-customer business model of leading vendors who have saturated their markets (like Microsoft) and are unable to fill their annual coffers unless they compel their customers to buy upgrades to products they already own—so in our current reality customers are required to buy treadmill versions of products they already own, versions that often make their users feel more like rats on a wheel than the more advanced, more productive champions of the future depicted in their vendors' marketing. As a comparison, the Teleportal Utility's (TPU's) Adaptable Consistent Interface 183 is kept updated to fit a plurality of users' preferences and devices, as described elsewhere.

In summary, with one TPU Adaptable Consistent Interface 183 and a set of main types of connections 180, users are able to learn and productively utilize the TP Devices environment 131 132 133 134 140 136 137 138 139, including Virtual Teleportals 134 140 on AIDs/AODs, and with Remote Control of Subsidiary Devices 140. With this type of Alternate Reality TPM departure possible, is it any wonder why the “Alternate Reality” chose this simpler path, and chose to invent around the bewildering user interface problems of our current reality?

Some pictorial examples are illustrated in FIG. 10, “Summary of TPM Connections and Interactions.” These reverse the Stack View in FIG. 9 by showing the TP Devices depicted in FIG. 8, but listing each device's types of connections and interactions. In brief, this example demonstrates how a Consistent TPU Interface 183 (and FIGS. 183 through 187 and elsewhere) is displayed to users 150 152 154 157 159 across the TP Devices environment 160 151 153 155 156 158 166 161 162 163 164 165 167. In some examples users may enter the TP Devices environment by using (1) an LTP 151 or an MTP 151, (2) an RTP 153, (3) an AID/AOD 155, (4) Devices In Use 158, or (5) Administration 157.

In each of these cases: (1) When a user 150 makes direct use of a Local Teleportal (LTP) 151 or a Mobile Teleportal 151, the user employs the Consistent TPU Interface 183; when said user 150 employs the LTP 151 or MTP 151 to control a Subsidiary Device 166 161 162 163 164 165, the user employs Remote Control Teleportaling (RCTP) 180, which is an adaptable subset of the consistent TPU Interface 183 (it is worth noting that whether an RCTP can run on each of these Subsidiary Devices depends on the functions and capabilities of each Subsidiary Device; and when it runs only an adapted subset of RCTP capabilities, only some TP features may be available—and those features would employ a subset of the Consistent TPU Interface 183); (2) When a user 152 makes direct use of a Remote Teleportal (RTP) 153, the user employs the Consistent TPU Interface 183; when said user 152 employs the RTP 153 to control a Subsidiary Device 166 161 162 163 164 165, the user employs Remote Control Teleportaling (RCTP) 180, which is an adaptable subset of the consistent TPU Interface 183 (with the same Subsidiary Device dependencies just noted); (3) When a user 154 makes direct use of an Alternate Input Device/Alternate Output Device (AID/AOD) 155 that has a plurality of Teleportaling features built into it, the user may employ the Consistent TPU Interface 183 for those direct Teleportaling features if that device's vendor also adopts the Consistent TPU Interface 183 for those Teleportaling features; when said user 154 employs an AID/AOD 155 by means of a Virtual Teleportal (VTP) 180, that VTP is an adaptable subset of the consistent TPU Interface 183 as described elsewhere (it is worth noting that whether a VTP can run on each of these AID/AOD devices depends on the functions and capabilities of each AID/AOD device; and when it runs only an adapted subset of VTP capabilities, only some TP features may be available—and those features would employ a subset of the Consistent TPU Interface 183); when said user 154 employs an AID/AOD 155 by means of a Virtual Teleportal (VTP) 180, that VTP may also be used to control a Subsidiary Device 166 161 162 163 164 165 by means of Remote Control Teleportaling (RCTP) 180, which is an adaptable subset of the consistent TPU Interface 183 (it is worth noting that whether a combined VTP and RCTP can run on each of these Subsidiary Devices depends on the functions and capabilities of each Subsidiary Device; and when it runs only an adapted subset of VTP and RCTP capabilities, only some TP features may be available—and those features would employ a subset of the Consistent TPU Interface 183); (4) When a user 159 makes direct use of the TPU's Active Knowledge Instructions (AKI) and/or Active Knowledge (AK) on a Device In Use (DIU) 158, the user may employ the Consistent TPU Interface 183, which contains an adaptable AKM interface for said AKM uses 159 158, if that device's vendor also adopts the Consistent TPU Interface 183 for said device's AKM deliveries and interactions (it is worth noting that whether a DIU can run an AKM interaction and display the AKI/AK depends on the functions and capabilities of each DIU; and when it runs only an adapted subset of AKM capabilities, only some AKI/AK may be available—and those features would employ a subset of the AKM portion of the Consistent TPU Interface 183); when a user 159 employs an intermediary device (in some examples an MTP 151, in some examples an AID/AOD 155, etc.) for an Active Knowledge Machine interaction on behalf of a Device In Use 158, the user employs the Consistent TPU Interface 183, which contains an adaptable AKM interface for said AKM uses 159 158; (5) When a user 157 administers said user's 157 profile 181, account(s), subscription(s), membership(s), settings, etc. (such as of the TPU 167 156; TPN 156 167; etc.), the user may employ the Consistent TPU Interface 183 when said Administration 157 is done by means of a TP Device such as LTPs 151, MTPs 151, and RTPs 153; said user 157 employs an adaptable subset of the Consistent TPU Interface 183 when Administration 157 is done by means of a VTP on an AID/AOD 155.

Again, the range of TP Devices 160 151 153 155 158 156 167 166 and types of user connections 150 152 154 157 159 employ one Consistent TPU Interface 183, which is customizable and adaptable by means of subsets to various AID/AOD devices 155, Subsidiary Devices 166, and Devices In Use 158 as described in FIGS. 183 through 187 and elsewhere. This means a user can learn just one interface and then manage and control the ARTPM's range of features and devices, as well as subsidiary devices. This Alternate Reality is designed as a radical simplification of our current reality which requires multiple professions, corporate functions and huge costs (such as parts of customer support, parts of documentation, usability, ergonomics, formative evaluation, etc.) to deal with the numerous user difficulties that result from today's inconsistent designs and complexities.

Logically Grouped List of ARTPM Components: To assist in understanding the ARTPM (Alternate Reality Teleportal Machine), FIG. 11 through FIG. 16 provide a high-level, logically grouped snapshot of some components in a list that is neither detailed nor complete. In addition, this list does not match the order of the specification. It does, however, provide some examples of a logical grouping of the ARTPM's components.

Turning now to FIG. 11, at the level of some main categories, in some examples an ARTPM 200 includes in some examples one or a plurality of devices 201; in some examples one or a plurality of digital realities 202; in some examples one or a plurality of utilities 203; in some examples one or a plurality of services and systems 204; and in some examples one or a plurality of types of entertainment 205.

Turning now to FIG. 12, in some examples ARTPM devices 211 include in some examples one or a plurality of Local Teleportals 211; in some examples one or a plurality of Mobile Teleportals 211; in some examples one or a plurality of Remote Teleportals 211; and in some examples one or a plurality of Universal Remote Controls 211. In some examples ARTPM subsystems 212 include in some examples superior viewer sensors 212; in some examples continuous digital reality 212; in some examples publication of outputs 212 such as in some examples constructed digital realities, in some examples broadcasts, and in some examples other types of outputs; in some examples language translation 212; and in some examples speech recognition 212. In some examples ARTPM devices access 213 includes in some examples RCTP (Remote Control Teleportaling) 213, which in some examples enables Teleportal devices to control and use one or a plurality of some networked electronic devices as subsidiary devices; in some examples VTP (Virtual Teleportal) 213, which in some examples enables other networked electronic devices to access and use Teleportal devices; and in some examples SD Servers (Subsidiary Device Servers) 213, which in some examples enable the finding of subsidiary devices in order, in some examples, to use the device, in some examples to use digital content that is on the subsidiary device, in some examples to use applications that run on the subsidiary device, in some examples to use services that a particular subsidiary device can access, and in some examples to use a subsidiary device for other uses.
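
The SD Server lookup just described (finding a subsidiary device by its content, its applications, or its services) can be pictured as a simple keyed search over device records. The record structure below is a hypothetical illustration, not a specified SD Server schema.

```python
# Sketch of an SD Server lookup: find subsidiary devices by what the
# caller wants to use (content, applications, or services). Hypothetical
# record fields; the specification names the capability, not a format.

devices = [
    {"id": "living-room-tv", "content": ["recorded-shows"], "apps": [],
     "services": ["cable"]},
    {"id": "home-pc", "content": ["photos"], "apps": ["photo-editor"],
     "services": ["internet"]},
]

def find_devices(kind, wanted):
    """kind is one of 'content', 'apps', or 'services'."""
    return [d["id"] for d in devices if wanted in d[kind]]

print(find_devices("apps", "photo-editor"))   # -> ['home-pc']
```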

Turning now to FIG. 13, in some examples ARTPM digital realities 220 include at a high level in some examples SPLS (Shared Planetary Life Spaces) 221; in some examples an ARM (Alternate Realities Machine) 222; in some examples Constructed Digital Realities 223; in some examples multiple identities 224; in some examples governances 225; and in some examples a freedom from dictatorships system 226. In some examples ARTPM SPLS (Shared Planetary Life Spaces) 221 include in some examples some types of digital presence 221, in some examples one or a plurality of focused connections 221, in some examples one or a plurality of IPTR (Identities, Places, Tools, Resources) 221, in some examples one or a plurality of directories 221, in some examples auto-identification 221, in some examples auto-valuing 221, in some examples digital places 221, in some examples digital events in digital places 221, in some examples one or a plurality of identities at digital events in digital places 221, and in some examples filtered views 221. In some examples an ARTPM ARM (Alternate Realities Machine) 222 includes in some examples the management of one or a plurality of boundaries 222 (such as in some examples priorities 222, in some examples exclusions 222, in some examples paywalls 222, in some examples personal protection 222, in some examples safety 222, and in some examples other types of boundaries 222); in some examples ARM boundaries for individuals 222; in some examples ARM boundaries for groups 222; in some examples ARM boundaries for the public 222; in some examples ARM boundaries for individuals, groups and/or the public that include in some examples filtering 222, in some examples prioritizing 222, in some examples rejecting 222, in some examples blocking 222, in some examples protecting 222, and in some examples other types of boundaries 222; in some examples ARM property protection 222; and in some examples reporting of the results of some uses of ARM boundaries 222, with in some examples recommendations for “best boundaries” 222, in some examples means for copying boundaries 222, and in some examples means for sharing boundaries 222.
In some examples ARTPM Constructed Digital Realities 223 include in some examples digital realities construction at one or a plurality of locations where their source(s) are acquired 223; in some examples digital realities construction at a location remote from where source(s) are acquired 223; in some examples digital realities construction by multiple parties utilizing one or a plurality of the same sources 223; in some examples digital realities reconstruction by one or a plurality of parties who receive a previously constructed digital reality 223; in some examples broadcasting a constructed digital reality from its source 223; in some examples broadcasting a constructed digital reality from one or a plurality of construction locations remote from where source(s) are acquired 223; in some examples broadcasting one or a plurality of reconstructed digital realities from one or a plurality of reconstruction locations 223; in some examples one or a plurality of services for publishing constructed digital realities and/or reconstructed digital realities 223; in some examples one or a plurality of services for finding and utilizing constructed digital realities 223; and in some examples one or a plurality of growth systems for assisting in monetizing constructed digital realities 223, such as providing assistance in some examples in revenue growth 223, in some examples in audience growth 223, and in some examples in other types of growth 223. In some examples ARTPM multiple identities 224 include means for life expansion as an alternative to medical science's failure to produce meaningful life extension; in some examples by establishing and enjoying a plurality of identities and lifestyles in parallel, such as in some examples public identities 224, in some examples private identities 224, and in some examples secret identities 224. In some examples ARTPM governances 225 are not governments and provide independent and separate means for various types of governance 225, such as in some examples self-governances by individuals 225; in some examples economic governances by corporations 225; and in some examples trans-border governances with centralized management that are based on larger goals and beliefs 225; and in some examples one or a plurality of governances may include an independent self-selected GRS (Governances Revenue System) 225. In some examples an ARTPM freedom from dictatorships system 226 includes means for individuals who live oppressed under one or a plurality of dictatorial governments to establish independent, free and secret identities 226 outside the reach of their oppressive government 226.
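
The ARM boundary types listed above (priorities, exclusions, paywalls, and others) suggest a small evaluation rule for anything entering one SPLS reality. The following is a minimal sketch under assumed rule fields; it is not a specified ARM schema, and the field and tag names are invented for illustration.

```python
# Sketch of ARM boundary evaluation for one SPLS: prioritize what is
# wanted, exclude what is not, and gate some sources behind a Paywall.
# Rule fields and tags are illustrative assumptions only.

def evaluate(item, boundaries):
    """Return 'exclude', 'paywall', or a priority rank for an incoming item."""
    if any(tag in boundaries["exclusions"] for tag in item["tags"]):
        return "exclude"                       # filtered out of this reality
    if item["source"] in boundaries["paywalled_sources"]:
        return "paywall"                       # admitted only if paid for
    for rank, tag in enumerate(boundaries["priorities"]):
        if tag in item["tags"]:
            return rank                        # lower rank = higher priority
    return len(boundaries["priorities"])       # unprioritized but allowed

family_spls = {"priorities": ["family", "school"],
               "exclusions": {"violence"},
               "paywalled_sources": {"advertiser-network"}}

print(evaluate({"tags": ["family"], "source": "friend"}, family_spls))        # 0
print(evaluate({"tags": ["violence"], "source": "news"}, family_spls))        # exclude
print(evaluate({"tags": ["ad"], "source": "advertiser-network"}, family_spls))  # paywall
```

Because each SPLS is tied to one identity, a person could hold a different boundary set per identity, which is how one person maintains several distinct digital realities.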

Turning now to FIG. 14, in some examples one or a plurality of ARTPM utilities 230 includes in some examples one or a plurality of infrastructure components 231; in some examples devices discovery and configuration 232 for one or a plurality of ARTPM devices; in some examples a common user interface for one or a plurality of ARTPM devices 233; in some examples a common user interface for one or a plurality of ARTPM devices access 233; in some examples one or a plurality of business systems 234; and in some examples an ecosystem 235, herein named “friendition.”

Turning now to FIG. 15, in some examples one or a plurality of ARTPM services and systems 240 include in some examples an AKM (Active Knowledge Machine) 241, in some examples advertising and marketing 242, and in some examples optimization 243. In some examples an ARTPM AKM (Active Knowledge Machine) 241 includes in some examples recognition of user needs during the use of one or a plurality of some networked electronic devices, with automated delivery of appropriate know-how and other information to said user at the time and place it is needed 241; in some examples other AKM delivered information includes “what's best” for the user's task 241; in some examples other AKM delivered information includes means to switch to “what's best” for the user's task 241, such as in some examples different steps 241, in some examples a different process 241, in some examples buying a different product 241, and in some examples making other changes 241; in some examples an AKM may provide a usage-based channel for in some examples advertising 241, in some examples marketing 241, and in some examples selling 241; in some examples an AKM includes multi-source(s) entry of its delivered know-how by one or a plurality of sources 241; in some examples an AKM includes optimization to determine the best know-how to deliver 241; in some examples an AKM includes goals-based reporting 241, such as in some examples dashboards 241, in some examples recommendations 241, in some examples alerts 241, and in some examples other types of actionable reports 241; in some examples an AKM includes self-service management of settings and/or controls 241; and in some examples an AKM includes means for improving the use of digital photographic equipment 241. In some examples an ARTPM includes advertising and marketing 242, including in some examples advertiser and sponsor systems 242; and in some examples one or a plurality of growth systems for in some examples tracking and analyzing appropriate data, in some examples providing assistance in determining revenue growth opportunities, in some examples determining audience growth opportunities, and in some examples determining other types of growth opportunities. In some examples an ARTPM includes optimizations 243, including in some examples means for self-improvement of one or a plurality of its services 243; in some examples means for determining one or a plurality of types of improvements and making visible to one or a plurality of users in some examples results data 243, in some examples “what works best” data 243, in some examples gap analysis between an individual's performance and average “best performance” 243, in some examples alerts 243, and in some examples other types of recommendations 243; in some examples optimization reporting 243, such as in some examples reports 243, in some examples dashboards 243, in some examples alerts 243, in some examples recommendations 243, and in some examples other means for making visible both current performance and related data such as in some examples comparisons to and/or gaps with current performance 243; and in some examples optimization distribution 243, such as in some examples enabling rapid switching to “what works best” 243, and in some examples enabling rapid copying of one or a plurality of versions of “what works best” 243.

Turning now to FIG. 16, in some examples one or a plurality of types of ARTPM entertainment(s) 250 include in some examples traditional licensing 251, in some examples ARTPM additions to traditional types of entertainment 252, and in some examples one or a plurality of new forms of online entertainment 253 that blend online entertainment games with the real world. In some examples an ARTPM includes entertainment licensing 251 that in some examples encompasses traditional licensing for use of one or a plurality of ARTPM components in traditional entertainment properties 251, and in some examples traditional licensing for use of one or a plurality of ARTPM components in commercial properties 251. In some examples an ARTPM includes technology additions to traditional types of entertainment 252, such as in some examples digital presence by one or a plurality of digital audience members at digital entertainment “events” 252; in some examples constructed digital realities that provide the “world” of a specific entertainment property 252; and in some examples various ARTPM extensions to traditional entertainment properties 252 and/or entertainment series 252, such as in some examples novels 252, in some examples movies 252, in some examples television shows 252, in some examples video games 252, in some examples events 252, in some examples concerts 252, in some examples theater 252, in some examples musicals 252, in some examples dance 252, in some examples art shows 252, and in some examples other types of entertainment properties 252. In some examples an ARTPM includes one or a plurality of RWE's (RealWorld Entertainment) 253, such as in some examples a multiplayer online game that includes known types of game play with virtual money, and also includes in some examples one or a plurality of real identities, in some examples one or a plurality of real situations, in some examples one or a plurality of real solutions, in some examples one or a plurality of real corporations, in some examples one or a plurality of real commerce transactions with real money, in some examples one or a plurality of real corporations that are players in the game, and in some examples other means that blend and/or integrate game worlds and game environments with the real world 253.

Look around from where you are sitting or standing. You are physically present, and as you walk around a room the view you see changes. If you stand so the closest window is about 3 to 4 feet away from you and look through it, then take two steps to the left, what you see through the window changes; and if you take three or four steps to the right, what you see through the window changes again. If you step forward you can see farther down and up through the window, and as you walk backward the view through the window narrows. Physical presence is immediate, simple and direct. As you move, your view moves and what you see changes to fit your position relative to the physical world. This is not how a television screen works, nor is it how a typical digital screen works. A screen shows you one fixed viewpoint, and as you move around it stays the same. The same is true for a PC monitor, a handheld tablet's display, or a cell phone's screen. As you move relative to the screen, the screen's view stays the same because your only “presence” is your physical reality, and there is no “digital reality” or “digital presence”—your screens are just static screens within your physical reality, so your actions are not connected to any “digital place.” Your TV, PC, laptop, netbook, tablet, pad and cell phone are just screens, not Teleportals.

Teleportal use introduction: Now imagine that you are looking into a Teleportal, which is a digital device whose display in some examples is about the same size and shape as the physical window you were just standing in front of, the window that you were looking through. Also imagine that you have one or a plurality of personal identities, as described elsewhere. Also imagine that each identity has one or a plurality of Shared Planetary Life Spaces (SPLS's), as described elsewhere. You are logged in as one of your identities, and have one of your SPLS's open. Across the bottom of the Teleportal you can see SPLS members who are present, each in a small video window. You are all present together, but you have video only, not audio, because they are all in the background, just as if they were on the same physical street with you but far enough away that you could not hear their conversations. When you want to talk or work with one of them you make that a focused connection, which expands its size and immediacy. Now you and that person are fully present together with a larger video image and two-way audio. You decide to stand while together, and as you move around in front of the focused connection your view of that person, and your view into their place and background, changes based upon your perspective and view into it, just as if you were looking in on them through a real physical window; plus your view has digital controls with added capabilities so that you have an (optional) “Superior Viewer” as described elsewhere. This is a single Teleportal “focused connection.” You can add another SPLS member to this focused connection, and you have the option of keeping each focused connection visible and separate on your Teleportal, or combining them into a single combined focused connection. That combined connection extracts each of those two SPLS members from their focused connections, and combines them with or without a background. If you choose to include a background you select it—the background may be one of their real locations, it may be your location, or you may choose any real or virtual location in the world to which you have access. Similarly, the others present in the combined focused connection may choose the same background you select, or they may each choose any real or virtual background they prefer. If you want, any of you may add resources such as computing, presentations, data, applications, enterprise business systems, websites, web resources, news, entertainment, live places such as the world's best beachfront bars, stored shows, live or recorded events, and much more—as described elsewhere. Each of you has a range of controls to make these changes, along with the size of the focused connection, its placement on the Teleportal, or other alterations and combinations as described elsewhere.
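
The presence states in this walkthrough (background presence with video only, promotion to a focused connection with audio, and combination over a self-chosen background) can be summarized in a small state model. The class, method, and state names below are illustrative assumptions, not ARTPM-specified interfaces.

```python
# Sketch of SPLS presence states: background (video only), focused
# (video + audio), and combined focused connections over a background.

class SPLSSession:
    def __init__(self, members):
        self.background_presence = set(members)  # small video, no audio
        self.focused = {}                        # member -> connection settings

    def focus(self, member):
        """Promote a background presence to a focused connection."""
        self.background_presence.discard(member)
        self.focused[member] = {"audio": True, "size": "large"}

    def combine(self, members, background):
        """Merge focused connections into one scene; each participant may
        independently choose any real or virtual background they can access."""
        return {"participants": list(members), "background": background}

session = SPLSSession({"ana", "ben", "chloe"})
session.focus("ana")
session.focus("ben")
print(session.combine(["ana", "ben"], background="beachfront-bar"))
```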

ARTPM reality introduction: In the same way that your SPLS's members have presence in your Teleportal in real time (even if most or all of them are not in a focused connection), you are also a member of each of their SPLS's—and that gives you presence in their Teleportals simultaneously, and you are available for an immediate focused connection by any of them. Because you have presence in a plurality of others' SPLS's and their Teleportals, your digital presence is simultaneous in multiple virtual places at one time. Because you have control over your presence in each of others' SPLS's, including attributes described elsewhere such as visibility, personal data, boundaries, privacy, secrecy, etc., your level of privacy is what you choose it to be, and you can expand or contract your privacy at any time in any one or more SPLS's, or outside of those SPLS's by other means as described elsewhere. In some examples this is instantiated as an Alternate Realities Machine (herein ARM), which provides new systems for control over digital reality. Because you have control over each of your SPLS's boundaries as described elsewhere such as in the ARM, you may filter out what you do not like, prioritize what you include, and set up new types of filters such as Paywalls for what you are willing to include conditionally. This means that one person may customize the digital reality for one SPLS, and make each SPLS's reality as different as they want it to be from their other digital realities. Since each SPLS is connected to an identity, one person may have different identities that choose and enjoy different types of realities—such as family, profession, travel, recreation, sports, partying, punk, sexual, or whatever they want to be—and each identity and SPLS may choose privacy levels such as public, private or secret. This provides privacy choices instead of privacy issues, with self-controlled choices over what is public, what is private and what is secret. Similarly, culture is transformed from top-down imposition of common messages into self-chosen multiple identities, each with the different type(s) of digital boundaries, filters, Paywalls and preferences they want for that identity and its SPLS's. Thus, the types of culture and levels of privacy in each digital reality are a reflection of a person's choices for each of his or her realities.

Optimization overlay: The ARTPM reverses the assumption that the primary purpose of networks is to provide connections and communications. It assumes that is secondary, and that the primary purpose of networks is to identify behavior, track it and respond to success and failure (based on what can be determined). Tracked behaviors and their results are aggregated as described elsewhere, and reported both individually and collectively as described elsewhere, so the most successful behaviors for a range of goals are highly visible. Aggregate visibility provides self-chosen opportunities for individuals to advance rapidly, in some examples to “leap ahead” across a range of in some examples goals, in some examples device uses, in some examples tasks, etc. An Active Knowledge Machine (herein AKM), for one example, delivers explicit “success guidance” to individuals at the point of need while they are doing a plurality of types of tasks. Thus, with an ARTPM some networks may start delivering human success so a growing number of people may achieve more of their goals, with the object of a faster rate of progress and growth.
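
A toy version of this tracking-and-optimization loop: aggregate task outcomes per behavior, then surface the behavior with the best observed results as “what works best.” The success-rate ranking below is a deliberately simple assumption made for illustration; the specification describes richer optimizations elsewhere.

```python
# Sketch of the optimization overlay: track behaviors per goal and make
# the most successful one visible. Aggregation method is an assumption.

from collections import defaultdict

attempts = defaultdict(lambda: [0, 0])   # (goal, behavior) -> [successes, tries]

def track(goal, behavior, succeeded):
    rec = attempts[(goal, behavior)]
    rec[1] += 1
    rec[0] += 1 if succeeded else 0

def what_works_best(goal):
    """Return the behavior with the highest observed success rate."""
    rates = {b: s / t for (g, b), (s, t) in attempts.items() if g == goal}
    return max(rates, key=rates.get) if rates else None

track("sharp-photo", "auto-mode", False)
track("sharp-photo", "auto-mode", True)
track("sharp-photo", "half-press-focus", True)
track("sharp-photo", "half-press-focus", True)
print(what_works_best("sharp-photo"))    # -> 'half-press-focus'
```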

Digital reality summary: In this new digital reality you simultaneously have presence in one or a plurality of digital locations as the one or multiple identities you choose to be at that moment, in the one or multiple Shared Planetary Life Spaces in which you choose to be present, in some examples with an ARM that enables setting its boundaries so that each reality is focused on what you want it to be, and in some examples with an AKM that keeps you informed of the most successful steps and options while you are doing tasks. With Teleportal controls you may include other IPTR (herein Identities [people], Places, Tools or Resources) by means of SPLS's, directories, the Web, search, navigation, dashboards [performance reporting], the AKM (Active Knowledge Machine, described elsewhere), etc. to make them all or part of your focused Teleportal connections and your digital realities. When you identify a potentially more successful digital reality or option, and want to try it, the systems that provide those choices, such as the ARM or AKM, also enable fast switching to the new option(s). At any one moment while you use and look through a Teleportal, your view may change dramatically by your selection of background place, and by changing your physical juxtaposition to the Teleportal, which responsively alters the view that it displays to you. Similarly, the views that others have of you may also be changed dramatically by their choices of their identities, SPLS's, backgrounds, goals, fast switching to various advances and their resulting digital realities—with their Teleportal views changing as they move around and look through their Teleportals. You are both present together in a larger “Expandaverse” of a growing number of digital realities that may be changed and advanced substantially by anyone at any moment.

Teleportal devices: In some examples it is an object of Teleportal devices to introduce a new set of networked electronic devices that are able to provide continuous presence in one or a plurality of digital realities (as described elsewhere), along with other features and operations (as described elsewhere).

FIG. 17, “Teleportal (TP) Devices Summary”: In some examples TP devices include Local Teleportals that are also referred to as LTP's (as described elsewhere), in some examples Mobile Teleportals that are also referred to as MTP's (as described elsewhere), in some examples Remote Teleportals that are also referred to as RTP's (as described elsewhere), in some examples Active Knowledge Machine devices that are also referred to as AKM devices (as described elsewhere), in some examples Alternate Input Devices/Alternate Output Devices that are also referred to as AIDs/AODs (as described elsewhere), in some examples TP Subsidiary Devices that are controlled by means of Remote Control Teleportaling that is also referred to as RCTP (as described elsewhere), in some examples Virtual Teleportal Devices that are other types of networked electronic devices that run a Virtual Teleportal that is also referred to as a VTP (as described elsewhere), in some examples a Teleportal Utility that is also referred to as a TPU (as described elsewhere), and in some examples other TP devices and connections that are described elsewhere.

FIG. 18, “Summary of Some TP Devices and Connections”: Some examples of TP devices are illustrated in an example focused connection that in this example includes an RTP, an LTP, various AIDs/AODs, a universal remote control, a TPU, and some types of TP Servers; and in some other examples (as described elsewhere) may include other types of TP devices, features, functions, services, etc.

FIGS. 19 through 25: Some examples of LTP's are illustrated which include in some examples LTP window styles; in some examples an LTP hidden in a wall pocket so that it can be utilized as a digital window along with a real physical window; in some examples a plurality of shapes for LTP's; in some examples framed LTP's; in some examples a plurality of integrated LTP's that provide a single combined screen; in some examples TP walls that are constructed from a plurality of LTP's; and in some examples other LTP styles that may be constructed from any combination of display, projector, interface, motion detection, and related components along with related processing (as described elsewhere).

FIG. 26, “Some MTP Style Examples”: Some examples of MTP styles are illustrated and described elsewhere (such as in FIG. 93) which include in some examples mobile phone styles; in some examples tablet and pad styles; in some examples portable communicator styles; in some examples wearable mobile device styles; in some examples netbook or laptop styles; in some examples portable projector styles; and in some examples other MTP styles that may be constructed from any combination of display, projector, interface, motion detection, and related components along with related processing (as described elsewhere).

FIG. 27, “Fixed RTP Examples,” and FIG. 28, “Mobile RTP Examples”: Some examples of RTP styles are presented in FIG. 27 and FIG. 28 and described elsewhere, which include in some examples land-based RTP examples; in some examples urban places RTP examples; in some examples nature and wildlife-based RTP examples; in some examples wearable RTP examples; in some examples portable or transportable RTP examples; in some examples hidden or concealed RTP examples; in some examples public observation RTP examples; in some examples private property RTP examples; in some examples underwater RTP examples; in some examples high-rise building fixed-location aerial RTP examples; in some examples tall tree-based fixed-location aerial RTP examples; in some examples balloon or floating device-based aerial RTP examples; in some examples airplane or drone-based aerial RTP examples; in some examples helicopter or unmanned hovering device-based aerial RTP examples; in some examples ship or boat RTP examples; in some examples rocket, satellite or spaceship-based outer space RTP examples; and in some examples (whose appearance is likely to take time) unmanned stationary or mobile devices on other planets, asteroids, comets, or other extraterrestrial location-based RTP examples.

Turning to a high-level view, FIG. 17, “Teleportal (TP) Devices Summary,” provides a fourth alternative to the three main high-level device architectures that exist from the typical user's viewpoint. In the first and simplest (named “invisible OS”) the device's operating system is invisible, and a user simply turns on a device (like a television, appliance, etc.), uses it directly, then turns it off; if the device connects to other devices (like a cable TV set-top box or DVR), it communicates over a network such as a public network like the Internet. But most devices typically differ in each of their interfaces, features and functions from other devices because differentiation is a competitive advantage, so this simpler architecture often yields a hailstorm of differentiated devices. In the second and most complex (named “visible OS”) the user must use the device's operating system to run the device, and Microsoft Windows is one example. A user turns on a PC which runs Windows, then the user employs Windows to load a stored program which in turn must be learned and used to perform its set of functions and then exited. To do something different a user loads a different stored program and learns it and uses it. To connect to and use a new type of electronic device the operating system must acquire its drivers, load its drivers and connect to the device; then it can use the device as part of its Windows environment. This “visible OS” provides robustness but it is also complex for users and many vendors as electronic devices add new features, and as the numbers and types of connectable electronic devices multiply. In the third and most controlled (named “controlled OS”) a single company, such as Apple with its iPhone/iPod/iPad/iTunes ecosystem, maintains control over its devices and how they connect and are kept updated. From a user's view this is simpler, but the cost is a premium price for customers and tight business and technical requirements for related vendors/developers, plus the controlling company receives a substantial percentage of the sales transactions that flow through its ecosystem—a percentage many times larger than any typical royalty would ever be.

Herein some examples in FIG. 17 illustrate a fourth high-level alternative (named “Teleportal Architecture” which is referred to here as “TPA”). In some examples a TPA includes a set of core devices that include LTP's (Local Teleportals) 1101, MTP's (Mobile Teleportals) 1106, and RTP's (Remote Teleportals) 1110. In some examples these core devices (LTPs, MTPs and RTPs) utilize one or a plurality of other networked electronic devices (named TP Subsidiary Devices 1132) by remote control, herein named RCTP (Remote Control Teleportaling) 1131 1132 1101 1106 1110. In some examples one or a plurality of networked electronic devices (named AID/AOD or Alternate Input Devices/Alternate Output Devices 1116) may run a VTP (Virtual Teleportal) 1138 1116 by which they connect to and run core devices (LTPs, MTPs and RTPs). In addition, an AID/AOD 1116 running a VTP 1138 may utilize a core device 1101 1106 1110 to control and use one or a plurality of subsidiary devices 1132 by means of RCTP 1131.

In some examples said TPA provides a fourth overall interconnection model for an environment that includes a plurality of disparate types of networked electronic devices: in some examples the core devices (LTPs, MTPs and RTPs) 1101 1106 1110 are the primary devices employed; in some examples the core devices (LTPs, MTPs and RTPs) 1101 1106 1110 use remote control (RCTP) 1131 to connect to and utilize one or a plurality of other networked electronic devices (TP Subsidiary Devices) 1132; in some examples one or a plurality of other types of networked electronic devices (AID's/AOD's) 1116 utilize a virtual teleportal (VTP) 1138 to connect to and use the core devices (LTPs, MTPs and RTPs) 1101 1106 1110; and in some examples the other networked electronic devices (AID's/AOD's) 1116 1138 may use the core devices (LTPs, MTPs and RTPs) 1101 1106 1110 to connect to and control the subsidiary devices (TP Subsidiary Devices by means of RCTP) 1131 1132.

In summary, this TPA model simplifies a broad evolution of a plurality of disparate networked electronic devices into core devices (LTPs, MTPs and RTPs) 1101 1106 1110 at the center with RCTP connections and control 1131 1132 going outward, and VTP connections and control 1116 1138 coming inward. Furthermore, a plurality of components (as described elsewhere) such as in some examples a consistent (and adaptive) user interface, simplify the connections to and use of networked electronic devices across the TPA.
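As a non-limiting illustration, the directionality of this TPA model can be sketched in a few lines of Python (hypothetical names such as Role, Device and may_connect; this is a sketch of the interconnection rule, not part of the specification):

from dataclasses import dataclass
from enum import Enum, auto

class Role(Enum):
    CORE = auto()        # LTPs, MTPs and RTPs at the center
    SUBSIDIARY = auto()  # TP Subsidiary Devices, controlled outward via RCTP
    AID_AOD = auto()     # AIDs/AODs, connecting inward via a VTP

@dataclass
class Device:
    name: str
    role: Role

def may_connect(src: Device, dst: Device) -> bool:
    """Encode the two directions of the TPA model: VTP connections flow
    inward (AID/AOD to core) and RCTP control flows outward (core to
    subsidiary); core devices also interconnect directly."""
    if src.role is Role.AID_AOD and dst.role is Role.CORE:
        return True
    if src.role is Role.CORE and dst.role is Role.SUBSIDIARY:
        return True
    if src.role is Role.CORE and dst.role is Role.CORE:
        return True
    return False

phone = Device("phone", Role.AID_AOD)
ltp = Device("ltp", Role.CORE)
tv = Device("tv", Role.SUBSIDIARY)
# An AID/AOD reaches a subsidiary device only through a core device:
assert may_connect(phone, ltp) and may_connect(ltp, tv) and not may_connect(phone, tv)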

In some examples of a TPA these devices (core devices, TP subsidiary devices, alternate input devices and alternate output devices) utilize one or a plurality of disparate public and/or private networks 1130; in some examples one or a plurality of these networks is a Teleportal Network (herein TPN) 1130; in some examples one or a plurality of these networks is a public network such as the Internet 1130; in some examples one or a plurality of these networks is a LAN 1130; in some examples one or a plurality of these networks is a WAN 1130; in some examples one or a plurality of these networks is a PSTN 1130; in some examples one or a plurality of these networks is a cellular radio network such as for mobile telephony 1130; in some examples one or a plurality of these networks is another type of network 1130; in some examples one or a plurality of these networks may employ a Teleportal Utility (herein TPU) 1130; and in some examples one or a plurality of these networks may employ in some examples Teleportal servers 1120, in some examples Teleportal applications 1120, in some examples Teleportal services 1120, in some examples Teleportal directories 1120, and in some examples other networked specialized Teleportal components 1120.

Turning now to a somewhat more detailed view, FIG. 17, “Teleportal (TP) Devices Summary,” illustrates some examples of TP devices, which are described elsewhere. In some examples a TP device is a stand-alone unit that may connect over a network with one or a plurality of stand-alone TP devices. In some examples a TP device is a sub-unit that is an endpoint of a larger system that in some examples is hierarchical, in some examples is point-to-point, in some examples employs a star topology, and in some examples utilizes another known network architecture, such that the combination of TP device endpoints, switches, servers, applications, databases, control systems and other components combine to form part or all of an overall system or utility with a combination of methods and processes. In some examples the types of TP devices, which are described elsewhere, include an extensible set of devices such as LTP's (Local Teleportals) 1101, MTP's (Mobile Teleportals) 1106, RTP's (Remote Teleportals) 1110, AID's/AOD's (Alternate Input Devices/Alternate Output Devices) 1116 connected by means of VTP's (Virtual Teleportals) 1138, Servers (servers, applications, storage, switches, routers, etc.) 1120, TP Subsidiary Devices 1132 controlled by RCTP (Remote Control Teleportaling) 1131, and AKM Devices (products and services that are connected to or supported by the Active Knowledge Machine, as described elsewhere) 1124. In some examples a consistent yet customizable user interface(s) is supported across TP devices 1101 1106 1110 1138 1116 1120 1124 1131 1132 as described elsewhere, which provides similar and predictable accessibility to the functionality and capabilities provided by TP devices, applications, resources, SPLS's, IPTR, etc. In some examples voice recognition plays an interface role so that TP devices 1101 1106 1110 1138 1116 1120 1124 1131 1132 and Teleportal usage may be controlled in whole or in part by voice commands; in some examples gestures such as on a touch screen or in the air by means of a hand-held or hand-attached controller play an interface role so that TP devices 1101 1106 1110 1138 1116 1120 1124 1131 1132 and Teleportal usage may be controlled in whole or in part by gestures; and in some examples other known interface modules or capabilities are employed to control TP devices 1101 1106 1110 1138 1116 1120 1124 1131 1132 and Teleportal usage as described elsewhere.

In some examples these devices and interfaces utilize one or a plurality of networks such as a Teleportal Network (TPN) 1130, LAN 1130, WAN 1130, IP (such as the Internet) 1130, PSTN (Public Switched Telephone Network) 1130, cellular 1130, circuit-switched 1130, packet-switched 1130, ISDN (Integrated Services Digital Network) 1130, ring 1130, mesh 1130, or other known types of networks 1130. In some examples one or a plurality of TP devices 1101 1106 1110 1138 1116 1120 1124 1131 1132 are connected to a LAN (Local Area Network) 1130 in which the extensible types of components in FIG. 17 reside on that LAN 1130. In some examples one or a plurality of TP devices 1101 1106 1110 1138 1116 1120 1124 1131 1132 are connected to a WAN (Wide Area Network) 1130 in which the extensible types of components in FIG. 17 reside on that one said WAN 1130. Similarly, in some examples one or a plurality of TP devices 1101 1106 1110 1138 1116 1120 1124 1131 1132 are connected to any of the other types of known networks 1130, such that the extensible types of components in FIG. 17 reside on one type of network 1130. In some examples two or a plurality of networks 1130 are internetworked, such as for example the Internet, in some examples by converged communications links that support multiple types of communications simultaneously such as voice, video, data, e-mail, Internet phone, focused TP communications, fax, remote data access, remote services, Web, Internet, etc., and include various types of known interfaces, protocols, data formats, etc. which enable said internetworking.

FIG. 17 illustrates some examples of connections between LTP's 1102 1103 1104, in which connections between the LTP's 1102 1103 1104, and connections between LTP's and other TP devices 1106 1110 1138 1116 1120, utilize one or a plurality of networks 1130, and in some examples one or a plurality of network resources 1120 1121 1122 1123. FIG. 17 also illustrates some examples of connections between MTP's 1107 1108 1109, in which connections between the MTP's 1107 1108 1109, and connections between MTP's and other TP devices 1101 1110 1138 1116 1120, utilize one or a plurality of networks 1130, and in some examples one or a plurality of network resources 1120 1121 1122 1123. FIG. 17 also illustrates some examples of connections between RTP's 1111 1115, in which connections between the RTP's and other TP devices 1101 1106 1138 1116 1120 utilize one or a plurality of networks 1130, and in some examples one or a plurality of network resources 1120 1121 1122 1123. FIG. 17 also illustrates some examples of connections, by means of one or a plurality of VTP's (Virtual Teleportals) 1138, between AID's/AOD's 1117 1118 1119, in which connections between the AID's/AOD's and other TP devices 1101 1106 1110 1120 1131 1132 utilize one or a plurality of networks 1130, and in some examples one or a plurality of network resources 1120 1121 1122 1123. FIG. 17 also illustrates some examples of connections between network resources (in some examples a utility[ies], in some examples servers, in some examples applications, in some examples directory[ies], in some examples storage, in some examples switches, in some examples routers, in some examples other types of network services or components) 1121 1122 1123, in which connections between the network resources and other TP devices 1101 1106 1110 1138 1116 utilize one or a plurality of networks 1130, and in some examples one or a plurality of other network resources 1120 1121 1122 1123. FIG. 17 also illustrates some examples of connections, by means of RCTP (Remote Control Teleportaling) 1131, between TP devices 1101 1106 1138 1116 and TP subsidiary devices 1132 which in some examples include mobile phones 1133, other types of access devices 1133, cameras 1134, sensors 1134, other types of endpoint interfaces 1134, PCs 1135, laptops 1135, networks 1135, tablets 1135, pads 1135, online games 1135, Web browsers 1136, Web applications 1136, websites 1136, online televisions 1137, cable TV set-top boxes 1137, DVR's 1137, etc., in which in some examples the link to the TP subsidiary devices 1132 is direct, in some examples the link to the TP subsidiary devices 1132 utilizes one or a plurality of networks 1130, and in some examples the link to the TP subsidiary devices 1132 utilizes one or a plurality of network resources 1120 1121 1122 1123. Similarly, in some examples one or a plurality of TP devices 1101 1106 1110 1116 1120 1124 1131 1132 are connected to any of the other types of TP devices 1101 1106 1110 1116 1120 1124 1131 1132 by means of networks 1130 as described elsewhere, such that the extensible types of components in FIG. 17 are connected to and interact with each other as described elsewhere. FIG. 17 also illustrates some examples of connections between AKM Devices (devices connected to or supported by the Active Knowledge Machine, as described elsewhere) 1125 1126 1127, in which connections between the AKM Devices and AKM network resources 1121 1122 1123 utilize one or a plurality of networks 1130, and in some examples one or a plurality of network resources 1120 1121 1122 1123.

The illustration in FIG. 17 merely illustrates some examples; actual configurations of TP devices 1101 1106 1110 1138 1116 1120 1124 1131 1132 connected to one or a plurality of networks 1130 will utilize choices of devices, hardware, software, servers, operating systems, networks, and other components that employ features and capabilities that are described elsewhere, to fit a particular configuration and a particular set of desired features. In some examples multiple components and capabilities may be incorporated into a single hardware device, such as in some examples one TP device such as one RTP 1111 may control multiple subsidiary devices such as external cameras and microphones 1112 1113 1114; and in some examples one hardware purchase may include part or all of an individual's TP lifestyle that includes a server and applications 1121 with a specific set of TP devices 1102 1107 1111 1112 1138 1117 1131 1133 1134 1135 1137 1125 such that the combination of TP devices actually constitutes one hardware purchase that fulfills one person's chosen set of TP needs and TP uses. In some examples the TP devices 1101 1106 1110 1138 1116 1120 1124 1131 1132 and network(s) 1130 may be owned and managed in various ways; in some examples a customer may own and manage an entire system; in some examples a third-party(ies) may manage a customer-owned system; in some examples a third-party(ies) may own and manage an entire system in which some or all TP devices and/or services are rented or leased to customers; and in some examples any known business model for providing hardware, software, and services may be employed.

Summary of some TP devices and connections: Some examples in FIG. 18 illustrate and further describe TP devices described herein. Turning now to some examples in FIG. 18, an overall summary 305 includes a Local Teleportal (LTP) 430, a Remote Teleportal (RTP) 420, a Teleportal Network (TPN) 425, which includes a Teleportal Shared Spaces Network (TPSSN) 425 and in some examples a Teleportal Utility (TPU) 425. Though the ARTPM is not limited to the elements in this figure, the components included are utilized to connect a user 390 in real-time with the Grand Canal in Venice, Italy 310. Without needing multiple cameras this one wide and tall remote view 310 is processed by the Local Teleportal's 430 processor(s) 360 to provide a varying view 315 320 325 of the Grand Canal 310, along with audio that is played over the Local Teleportal's speaker(s) 375. The viewpoint displayed in the Local Teleportal 370 parallels how the view through a real local window changes dynamically as a viewer(s) 390 moves. The view displayed in the LTP 370 is therefore dynamically based on the viewer's position(s) 385 390 395 relative to the LTP 370 as determined by the LTP's SVS (Superior Viewer Sensor) 365. In some examples when a viewer stands on the left 385 of the LTP 370, the SVS 365 determines this and the LTP's processor(s) 360 displays the appropriate right portion 325 of the Grand Canal 310. In some examples as the viewer 390 moves to the center in front of the LTP 370, the center view 320 of the Grand Canal 310 is displayed, and in some examples when the viewer moves to the right 395 the left view 315 of the Grand Canal 310 is displayed.

In some examples a calculated view (395 with 315, 390 with 320, 385 with 325) that matches a real window is displayed in the LTP 370 by means of an SVS 365 that determines the viewer(s) position relative to the LTP, and a CPM (Communications/Processing Module) 360 that calculates the appropriate portion of the Grand Canal 310 to display. In one example the viewer 385 stands to the left of the Teleportal 370 so he can directly see and talk to the gondolier who is located on the right of this view of the Grand Canal 325; in some examples the remote microphones 330 are 3D or stereo microphones, in which case the viewer's speakers 375 may acoustically position the sound of the gondolier's voice appropriately for the position of the gondolier in the place being viewed.
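One way such a calculated view might be computed is sketched below in Python (hypothetical parameter names and values; a minimal illustration, not the specification's method). As with a real window, the displayed crop moves opposite the viewer: a viewer stepping left sees more of the scene's right side.

def window_view(viewer_x_m: float, frame_width_px: int, view_width_px: int,
                gain_px_per_m: float = 400.0) -> int:
    """Return the left edge (in source pixels) of the portion of a wide
    remote frame to display, given the viewer's lateral offset in meters
    from the display's centerline (negative = viewer's left)."""
    center = (frame_width_px - view_width_px) / 2
    left = center - viewer_x_m * gain_px_per_m  # view moves opposite the viewer
    return int(max(0, min(frame_width_px - view_width_px, left)))

print(window_view(0.0, 3840, 1280))   # 1280: viewer centered, center crop shown
print(window_view(-0.5, 3840, 1280))  # 1480: viewer left, right portion shown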

To achieve this in some examples a Remote Teleportal (RTP) 420 is at an SPLS remote place and it comprises a video and audio source(s) 330, including a processor(s) 335 that provides remotely controlled processing of video, audio, data, applications 335, storage 335 and other functions 335; and a Remote Communications Module 337 that in some examples may be attached to the Internet 340, in some examples may be attached to a Teleportal Network 340, in some examples may be attached to a RTP Hub Server 350, or in some examples may be attached to another communications network such as a private corporate WAN (Wide Area Network) 340. In some examples a Remote Teleportal 322 may include devices such as a mobile phone 322 that is capable of delivering both video and audio, and is running a Virtual Teleportal 322, and in some examples is attached wirelessly to a cell phone vendor's network 340, in some examples is attached wirelessly (such as by Wi-Fi) to the Internet 340, in some examples is attached to satellite communications 340. In some examples said RTP device 420 may possess other features such as self-propelled mobility (on the ground, in the air, in the water, etc.); in some examples said RTP device 420 may provide multicast; in some examples said RTP device 420 may dynamically alter video and audio in real-time, or in near real-time before it is transmitted (with or without informing viewers 390 that such alteration has taken place).
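A minimal sketch of such an RTP's capture-alter-transmit loop, assuming hypothetical caller-supplied callables for capture, alteration and transmission (not an actual device interface), might look like this:

import time

def run_rtp(capture, alter=None, transmit=None, fps: float = 30.0):
    """Capture frames, optionally alter each in (near) real time before it
    is transmitted (e.g., constructing a digital reality), then send it."""
    period = 1.0 / fps
    while True:
        frame = capture()
        if frame is None:
            break                  # source stopped or device powered down
        if alter is not None:
            frame = alter(frame)   # alteration occurs before transmission
        if transmit is not None:
            transmit(frame)
        time.sleep(period)

frames = iter(["frame-1", "frame-2", "frame-3"])
run_rtp(capture=lambda: next(frames, None),
        alter=lambda f: f.upper(),
        transmit=print,
        fps=1000.0)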

In some examples video, audio and other data from said RTP 420 322 are received by either a Remote Teleportal Group Server (RTGS) 345 or a Teleportal Network Hub Server (TPNHS) 350. In some examples video, audio and other data from said RTP 420 322 may be processed by a Teleportal Applications Server (TPAS) 350. In some examples video, audio and other data from said RTP 420 322 are received and stored by a Teleportal Storage Server (TPSS) 350. In some examples the owner(s) of the respective RTPs 420 322, and each RTGS 345, TPNHS 350, TPAS 350, or TPSS 350 may be wholly public, wholly private or a combination of both. In some examples whether public or private the RTP's place, name, geographic address, ownership, any charges due for use, usage logging, and other identifying and connection information may be recorded by a Teleportal Index/Search Server (TPI/SS) 355 or by other TP applications 355 that provides means for a viewer 390 of a LTP 370 to find and connect with an RTP 420 322. In some examples said TPI/SS 355, TPAS 350, or TPSS 350 may each be located on a separate server(s) 355 or in some examples run on any Teleportal Server 345 350 355.
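The role of such an index/search service may be illustrated by a minimal in-memory sketch (hypothetical names and fields; a real TPI/SS would be a networked, persistent, access-controlled service):

registry = {}  # stand-in for a Teleportal Index/Search Server's records

def register_rtp(rtp_id, place, owner, public=True, fee=0.0):
    """Record an RTP's place, ownership, any charges due for use, etc."""
    registry[rtp_id] = {"place": place, "owner": owner,
                        "public": public, "fee": fee, "uses": 0}

def find_rtps(place_query):
    """Return the IDs of public RTPs whose place matches the query."""
    return [rid for rid, rec in registry.items()
            if rec["public"] and place_query.lower() in rec["place"].lower()]

def connect(rtp_id):
    """Log one use and return any charge due for it."""
    rec = registry[rtp_id]
    rec["uses"] += 1
    return rec["fee"]

register_rtp("rtp-042", "Grand Canal, Venice, Italy", "Example Owner", fee=0.05)
assert find_rtps("venice") == ["rtp-042"]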

In some examples the LTP 370 has a dedicated controller 380 whose interface includes buttons and/or visual interface means designed to run an LTP that may be displayed on a screen or controlled by a user's gestures or voice or other means. In some examples the LTP 370 has a “universal remote control” 380 of multiple electronics whose interface fits a range of electronics. In some examples a variety of on-screen controls, images, menus, or information can be displayed on the Local Teleportal to provide means for control or navigation 400 405. In some examples means provide access to groups, lists or a variety of small images of other places (which include IPTR [Identities/people, Places, Tools, Resources]) directly available 400 405. In some examples the LTP 370 displays one or a plurality of currently open Shared Planetary Life Space(s) 400 405. In some examples the LTP 370 displays a digital window style such as overlaying a double-hung window 410 over the RTP place 310 315 320 325. In some examples the LTP 370 simultaneously displays other information or images (which include people, places, tools, resources, etc.) on the LTP 370 such as described in FIGS. 91, 92 and elsewhere.

In some examples an LTP 430 may not be available and an Alternate Input Device/Alternate Output Device (AID/AOD) 432 434 436 438 running a Virtual Teleportal (VTP) may be employed instead. In some examples an AID/AOD may be a mobile phone 432 or a “smart” phone 432. In some examples an AID/AOD may be a television set-top box 436 or a “smart” networked television 436. In some examples an AID/AOD may be a PC or laptop 438. In some examples an AID/AOD may be a wearable computing device 438. In some examples an AID/AOD may be a mobile computing device 438. In some examples an AID/AOD may be a communications-enabled DVR 436. In some examples an AID/AOD may be a computing device such as a netbook, tablet or a pad 438. In some examples an AID/AOD may be an online game system 434. In some examples an AID/AOD may be an appropriately capable Device In Use such as a networked digital camera, or surveillance camera 432. In some examples an AID/AOD may be an appropriately capable digital device such as an online sensor 432. In some examples an AID/AOD may be an appropriately capable web application 438, website 438, web widget 438, servlet 438, etc. In some examples an AID/AOD may be an appropriately capable application 438 or an API that calls code that provides these functions 438. Since these do not have a Superior Viewer Sensor 365 or a Communication/Processing Module 360, they do not automatically alter the view of the remote scene 310 in response to changes in the viewer's location. Therefore in some examples AIDs/AODs utilize a default view, while in some examples AIDs/AODs utilize manual means to alter the view displayed.
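A simple sketch of this fallback logic (hypothetical names; a sketch only) might be:

def select_view(has_svs: bool, sensor_offset=None, manual_offset=None,
                default_offset: float = 0.0) -> float:
    """Pick the lateral view offset for a device. A device with an SVS
    tracks the viewer automatically; an AID/AOD falls back to a manual
    setting if the user provided one, else to a fixed default view."""
    if has_svs and sensor_offset is not None:
        return sensor_offset
    if manual_offset is not None:
        return manual_offset
    return default_offset

assert select_view(True, sensor_offset=0.3) == 0.3      # SVS-equipped LTP
assert select_view(False, manual_offset=-0.2) == -0.2   # AID/AOD, manual control
assert select_view(False) == 0.0                        # AID/AOD, default view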

In some examples two or a plurality of LTP's 430 and AIDs/AODs provide TP Shared Planetary Life Spaces (SPLS) directly and with VTP's. This may be enabled if two or a plurality of Teleportals 430 or AIDs/AODs 432 434 436 438 are configured with a camera 377 and microphone 377 and the CPM 360 or VTP includes appropriate processing, memory and software so that it can provide said SPLS. When embodied and configured in this manner, both LTP's 430 and AIDs/AODs 432 434 436 438 can serve as devices that provide Teleportal Shared Space(s) between two or a plurality of LTPs and AIDs/AODs 432 434 436 438.

LTP devices physical examples: Some examples in FIGS. 19 through 25, along with some examples in FIGS. 91 through 95 and elsewhere, illuminate and further describe some extensible Teleportal (TP) device examples included herein. Turning now to some examples, TP devices may be built in a wide variety of devices, designs, models, styles, sizes, etc.

LTP “window” styles, audio and dynamic positioning: In some examples a single Local Teleportal (LTP) 451 in FIG. 19 shows that a Teleportal may be designed based on an underlying reconceptualization of the glass window: the window as a digital device that is a portal into “always on” Shared Planetary Life Spaces (SPLS), constructed digital realities, digital presence “events”, and other digital realities (as described elsewhere). In this example the LTP has opened an SPLS that includes a connection to a view 450 that is inside the Grand Canyon on the summer afternoon when this LTP is being viewed, with that view expanded to the entire LTP display, as if it were a real window looking out inside the Grand Canyon on that day. Because an LTP's display is a component of a digital device, in some examples the decorative window frame 451 452 may be digitally overlaid as an image over the SPLS connection 450. In some examples the decorative window frame's style, color, texture, material, etc. (in some examples wood, in some examples metal, in some examples composites, etc.) may be varied to create the appearance of different types of windows that provide presence at this remote place 450. In the examples in FIG. 19 two window styles are shown, a casement window style 451 and a double-hung window style 452. In each example an LTP may include audio. Since in this example the window-like display components (e.g., the frame and internal window styles) 451 452 are a digital image that is overlaid on the SPLS place, these can be varied at a command from the viewer to show this example LTP window as partially open, or completely open. The audio's volume can be raised or lowered automatically and proportionately as the window is digitally “opened” or “closed” to reflect the audio volume changes that would occur if this were a real local glass window with that SPLS place actually outside of it. Another LTP component in some examples is illustrated in FIG. 19, an optional Superior Viewer Sensor (herein SVS, as described elsewhere) 453 that may be used to automatically adjust the view of a focused connection place in response to changes in the position of the viewer(s), so that this digital “window view” behaves in the same way as a real window's view changes as a viewer moves relative to it, which may increase the feeling of presence in some examples with SPLS people, in some examples with SPLS places, etc.
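One plausible mapping of this open/closed audio behavior, sketched with assumed values (e.g., a small residual gain so a “closed” digital window still passes faint, muffled sound), is:

def window_audio_gain(open_fraction: float, closed_gain: float = 0.15) -> float:
    """Scale remote-place audio with how far the digital window is 'open':
    0.0 = fully closed, 1.0 = fully open, rising linearly between them."""
    f = max(0.0, min(1.0, open_fraction))
    return closed_gain + (1.0 - closed_gain) * f

assert window_audio_gain(0.0) == 0.15             # closed: faint sound remains
assert abs(window_audio_gain(1.0) - 1.0) < 1e-12  # fully open: full volume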

Hide or show LTP over a local window, using a wall pocket: In some examples FIGS. 20 and 21 show the combination of a Local Teleportal 457 461 with a local glass window 456 by means of a wall pocket 458. In some examples a traditional local glass window 456 may have a “pocket door” space in the wall 458 along with a mechanical motor and a track that slides the LTP 457 461 in and out from the pocket in the wall 458. In this example the local glass window view 456 is on the third floor of an apartment in the northern USA during a winter day, with the local glass window 456 visible and the LTP 457 hidden in the pocket in the wall 458 by mechanically sliding it into this pocket (as shown by the dotted line 458). In some examples, as illustrated in FIG. 21, the single Local Teleportal (LTP) 461 is mechanically slid out from its wall pocket to cover the local glass window 460 with the LTP showing a TP connection to an SPLS place 461 that replaces the local glass window's view of the apartment building. This SPLS place 461 is inside the Grand Canyon during winter. In some examples the local glass window 460 is covered by the LTP 462 with an SPLS place visible 461. The dotted line 462 shows where the LTP is moved over the local glass window's view of an apartment building 456, whose local view was visible in a prior figure.

Multiple shapes for Teleportals: In some examples various shapes and styles may be employed for Teleportals, and some examples are illustrated in FIG. 22 which shows an SPLS place 450 inside the Grand Canyon during summer. In some examples local glass windows with various sizes and shapes can have a Local Teleportal (LTP) installed such as an arch shaped LTP 465 in some examples, an octagon shaped LTP 466 in some examples, and a circular shaped LTP 467 in some examples. Each of these example shapes, and other examples of shaped LTPs, may be accomplished by means such as (1) in some examples permanently mounting an LTP in a shaped local window 465 466 467, (2) in some examples permanently mounting an LTP in front of a shaped local window 465 466 467, (3) in some examples sliding an LTP in and out of a wall pocket 465 466 467 to use or not use the local window by means of a wall pocket and a mechanical motor and track, as illustrated in FIGS. 20 and 21. To display an SPLS place appropriately in a shaped LTP of varying size and shape, in some examples automated controls set an appropriate amount of zooming out or magnification of the SPLS place, and in some examples manual controls may be used to set an appropriate amount of zooming out or magnification of the SPLS place. These examples are illustrated in FIG. 22 with the arch window slightly magnified 465, and the circular window slightly zoomed out 467. Also in FIG. 22 the rectangular “H” above each of these three examples of differently shaped LTPs 468 represents an optional Superior Viewer Sensor (SVS) that adjusts the view in each LTP to match the position(s) of the viewer(s).
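The automated zooming might, in one minimal sketch (hypothetical function name, mirroring the common “cover” fit used in display software), choose a uniform scale so that the rectangular source frame fully covers the shaped viewport's bounding box, with the shape then masked out of that box:

def cover_scale(src_w, src_h, vp_w, vp_h):
    """Uniform scale so a src_w x src_h frame fully covers a vp_w x vp_h
    bounding box; values below 1.0 zoom out, values above 1.0 magnify."""
    return max(vp_w / src_w, vp_h / src_h)

print(cover_scale(1920, 1080, 1000, 1000))  # ~0.93: circular LTP, zoomed out
print(cover_scale(1920, 1080, 1000, 1600))  # ~1.48: tall arch LTP, magnified

Consistent with the FIG. 22 examples above, the wide source is slightly zoomed out for the circular LTP and magnified for the taller arch.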

Local Teleportals in portable frames: In some examples the display(s) of a single Local Teleportal or a plurality of Local Teleportals 471 472 may be in a portable frame(s) 470, which in turn may be hung on a wall, placed on a stand, stood on a desk, or put in any desired location. As illustrated elsewhere, said outside “frame” 470 may be a digital border and/or decoration rather than part of the physical frame, while in some examples it may be an actual physical frame 470. If said outside frame 470 is digital, then various frame designs and colors may be stored and changed at will by means of local or remote processing, or retrieved on demand to provide a wider range of designs and colors, whether these look like traditional frames or are artistically creative digital alterations such as “torn edges” on the images displayed. In some examples an LTP that is in a portable frame may be in various sizes and orientations (in some examples portrait 471 or landscape 472, in some examples small or large, in some examples vertical or horizontal, in a larger example single or multiple views on one LTP, etc.) to fit each viewer's criteria in some examples, budget in some examples, available space in some examples, subject choices in some examples, etc. Because an LTP is a digital device that is a portal into “always on” Shared Planetary Life Spaces (SPLS), the LTP's in FIG. 23 show an example SPLS focused connection with a weather satellite that is located over a hurricane crossing Florida 471—as if the viewer were in space looking out on that scene. In some examples LTPs in portable frames may be used to observe a chain of retail stores, and a single LTP 472 is observing a franchisee's ice cream store from an SPLS that includes all of that chain's retail ice cream locations. Also in some examples one SPLS place may be expanded to fill the entire LTP display, as in these examples 471 472. Also in this figure, the rectangular “H” in the top of each of these two examples of framed LTPs 473 represents an optional Superior Viewer Sensor (SVS) that adjusts the view in each LTP to match the position(s) of the viewer(s).

Multiple Teleportals integrated into a single view: In some examples the displays of two or a plurality of Teleportals may be combined into one larger display. One example of this is illustrated in FIG. 24 which shows said integration in a manner that simulates the broad outside view that is observed from adjacent multiple local glass windows. In some examples the plurality of Teleportals may be touching to provide one panoramic view 481. In some examples the plurality of Teleportals may be slightly separated from each other as with some local glass window styles. Regardless of the physical shape(s) or style(s) of said integrated Teleportals, together they may display one appropriately combined view 481, which in this example is from an SPLS place inside the Grand Canyon on that summer day, with that view expanded to the integrated LTP display—as if it were a real window present at that place on that day. In some examples the Teleportal's SPLS place and the full Teleportal display is chosen by a single viewer 482 using a handheld wireless remote control 483. In some examples the window perspective displayed is determined by a single Superior Viewer Sensor (SVS) 486 by means of algorithms calculated by one or a plurality of processors 484. In some examples the window perspective displayed is determined by a plurality of Superior Viewer Sensors (SVS) 487 488 489 by means of algorithms calculated by one or a plurality of processors 484. The local sounds in the Grand Canyon are played over the Teleportal's audio speaker(s) 485. In some examples the window style of the Teleportal 480 may be physical. In some examples the window style of the Teleportal 480 may be digitally displayed from multiple stored styles and overlaid over the SPLS place 481.

Larger integrated Teleportals/Teleportal Walls: In some examples known video wall technology may be applied so that multiple broader or taller Teleportals may span larger areas of a wall(s), room(s), stage(s), hall(s), billboard(s), etc. FIG. 25 illustrates some examples of larger integrated Teleportal Walls such as in some examples a 2-by-2 Teleportal 492, and in some examples a 3-by-3 Teleportal 493. The integration of multiple Teleportals into one “Teleportal Wall” is done by the processor(s) and software 484 in FIG. 24. Whether there should be one SVS (Superior Viewer Sensor) 486 or a plurality of SVS's 487 488 489 depends on the location of the Teleportal Wall 492 493: In some examples it may be in heavily trafficked public areas with moving viewers, in some examples sports bars whose SPLS's are located inside of football stadiums, baseball stadiums, and basketball arenas; in which cases these might not include an SVS. In some examples a Teleportal Wall 492 493 may be in a more one-on-one location, which in some examples is a family room and in some examples is a business office or cubicle; there one or a plurality of SVS(s) may be utilized to provide appropriate changes in the Teleportal Wall scene(s) displayed in response to the viewer(s) position(s). Alternatively, in some examples a projected LTP display may be utilized instead of a LTP wall, in which case the LTP's display size may be large and varying based on the viewers' needs or preferences, and the projection size may also be determined by the features and capabilities of the projection display device; similarly also, in some examples one or a plurality of SVS may be utilized with a projected LTP display.
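Splitting one combined view across such a wall can be sketched as follows (hypothetical names; real video-wall processors also handle bezel compensation and frame synchronization):

def wall_tiles(frame_w, frame_h, cols, rows):
    """Split one combined frame into per-panel crop rectangles for a
    cols x rows Teleportal Wall; each tuple is (left, top, width, height)."""
    tile_w, tile_h = frame_w // cols, frame_h // rows
    return [(c * tile_w, r * tile_h, tile_w, tile_h)
            for r in range(rows) for c in range(cols)]

# A 3-by-3 wall showing one 5760x3240 panoramic frame:
tiles = wall_tiles(5760, 3240, 3, 3)
assert len(tiles) == 9
assert tiles[4] == (1920, 1080, 1920, 1080)  # the center panel's crop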

MTP devices physical examples: Mobile Teleportals (MTPs) may be constructed in various styles, and some examples are illustrated in FIG. 26, “Some MTP (Mobile Teleportal) Styles,” which are based on a common factoring of digital devices into Teleportals with new features such as “always on” Shared Planetary Life Spaces (SPLS). Because each MTP utilizes the same technologies as other Teleportal devices but implements them in a variety of form factors and assemblages of hardware and software components, said MTP's provide parallel features and functionality to other Teleportal devices. Since each form factor continuously integrates processors that become faster and more powerful, more memory, higher bandwidth communications, etc., these MTP styles exemplify an evolving continuum of Teleportal capabilities. In the examples in FIG. 26 three mobile phone styles 501 are illustrated including a full-screen design 501 that operates by means of a touch screen and a single physical button at the bottom, a flip-open design 501 such as a Star Trek communicator, and a full-button design 501 that includes a keyboard with a trackball and function keys. In each example audio input and output parallels a mobile phone's microphone and speaker, including a speakerphone function for audio communications while viewing the screen. Alternately, audio input/output may be provided by wireless means such as a Bluetooth earpiece or headset, or by wired means such as a hands-free microphone/earpiece or headset. In each mobile phone-like design an optional Superior Viewer Sensor (herein SVS, as described elsewhere) 502 is located on an MTP (such as at its top in each of these examples), and the SVS may be used to automatically adjust the view of a focused connection place in response to changes in the position of a viewer.

In the examples in FIG. 26 three tablet and pad styles 504 are illustrated including a small pad design 504 that has multiple physical buttons and a trackball, a medium-sized tablet design 504 that has a stylus and a physical button, and a medium to large pad design 504 that operates by means of a touchscreen and a single physical button. In each example audio input and output parallels a mobile phone's microphone and speaker, including a speakerphone function for audio communications while viewing the screen. Alternately, audio input/output may be provided by wireless means such as a Bluetooth earpiece(s) or headset(s), or by wired means such as a hands-free microphone/earpiece or headset. In each tablet-like and pad-like design an optional Superior Viewer Sensor (herein SVS, as described elsewhere) 505 is located on an MTP (such as at its top in each of these examples), and the SVS may be used to automatically adjust the view of a focused connection place in response to changes in the position of a viewer.

In the examples in FIG. 26 two portable communicator styles 507 508 are illustrated including a wireless communicator 507 that has multiple buttons like a mobile phone, with audio input and output that parallels a mobile phone's microphone and speaker, including a speakerphone function for viewing the screen while communicating; or, alternatively, a base-station with a built-in speakerphone; or, alternatively, a wireless Bluetooth earpiece or headset. In this type of design an optional Superior Viewer Sensor (herein SVS, as described elsewhere) 502 is located at the top of this communicator's handset, and the SVS may be used to automatically adjust the view of a focused connection place in response to changes in the position of a viewer. Another example of a portable communicator style is an eyeglasses design 508 that includes a visual display with audio output through speakers next to the ears and audio input through a hands-free microphone. In this type of design an optional Superior Viewer Sensor (herein SVS, as described elsewhere) 502 is located to one side or both sides of said visual display and uses eye tracking to automatically adjust the view of a focused connection place in response to changes in the directional gaze of a viewer.

In the examples in FIG. 26 two netbook and laptop styles 510 are illustrated including the equivalents of a full-featured laptop and a full-featured netbook that are, however, designed as Mobile Teleportals. In each example audio input and output parallels a netbook's or laptop's microphone and speaker for audio communications while viewing the screen. Alternately, audio input/output may be provided by wireless means such as a Bluetooth earpiece or headset, or by wired means such as a microphone or headset. In each netbook-like and laptop-like design an optional Superior Viewer Sensor (herein SVS, as described elsewhere) 505 is located on an MTP (such as at its top in each of these examples), and the SVS may be used to automatically adjust the view of a focused connection place in response to changes in the position of a viewer.

In the examples in FIG. 26 one portable projector style 514 is illustrated including a portable base unit 515 which provides Teleportal functionality and may be connected by cable or wirelessly with said projector 514 (or, alternatively, said projector and base station may be combined within one portable case). In said example the portable projector's visual image 516 is displayed on a screen 516, a wall 516, a desktop 516, a whiteboard 516, or any desired and appropriate surface 516. In a portable projector audio input and output are provided by a microphone 518 and a speaker 518, including a speakerphone function for viewing the projected image 516 while communicating from a location(s) next to or near the projector. Alternately, audio input/output may be provided by means such as a wireless Bluetooth earpiece 518 or headset 518, or a wired microphone or hands-free microphone/earpiece. In each portable projector-like design an optional Superior Viewer Sensor (herein SVS, as described elsewhere) 517 is located on an MTP (such as at its top in this example), and the SVS may be used to automatically adjust the view of a projected connection place in response to changes in the position of a viewer.

RTP devices physical examples: Turning now to FIG. 27, “Fixed RTP (Remote Teleportal),” in some examples an RTP 2004 (as described elsewhere in more detail) is a networked and remotely controlled TP device that is a fixed RTP device 2004 that may operate on land 2011, in the water 2011, in the air 2011, or in space 2011. In some examples said RTP 2004 is functionally equivalent to an LTP 2001 (including in some examples hardware, software, architecture, components, systems, applications, etc. as described elsewhere) or an MTP 2001 (as described elsewhere) but may have one or a plurality of additional sensors, an alternate power source(s), and one or a plurality of (optional) means for mobility; may communicate by means of any of a plurality of networks; and may be controlled remotely over one or a plurality of networks 2005 with a controlling device(s) such as an LTP 2001, an MTP 2001, a TP subsidiary device 2002, an AID/AOD 2003 or by another type of networked electronic device. Alternatively, an RTP 2004 (as described elsewhere) may contain a subset of an LTP's functionality and have said subset controlled remotely in the same manner. Alternatively, an RTP 2004 (as described elsewhere) may contain a superset of an LTP's functionality by including additional types of sensors, means for mobility, etc. In addition, in some examples an RTP's 2004 remote control includes the operation of the device itself, its sensors, software means to process said sensors' input, recording means to store said sensors' data, networking means to transmit said sensors' raw data, networking means to transmit said sensors' processed data, etc. The illustrations in FIGS. 27 and 28 are therefore examples of RTP devices 2004 connected to one or a plurality of networks 2005 that utilize choices of devices, hardware, sensors, software, communications, mobility, servers, operating systems, networks, and other components that employ features and capabilities to each fit a particular configuration and set of desired features, and may be modified as needed to fit a plurality of purposes.

In some examples 2010 a Remote Teleportal (herein RTP) is fixed in a specific physical location, place, etc. and may also have a fixed orientation and direction so that it provides observation, data collection, recording, processing, and (optional) two-way communications in a preset fixed place or domain; or alternatively a fixed RTP may include remote controlled PTZ (Pan, Tilt, Zoom) so that the orientation and/or direction of said RTP (or of one of its components such as a camera or other sensor) may be controlled and directed remotely.

Said remote control of said fixed RTP 2004 2010 includes sending control signal(s) from one or a plurality of controlling devices 2001 2002 2003, receiving said control signal(s) by said RTP 2004 2015, processing said received control signal(s) by said RTP 2004 2015, then controlling the appropriate RTP function(s) 2004 2013 2014 2015 2016, component(s) 2004 2013, sensor(s) 2004 2013, communications 2004 2016, etc. of said RTP device 2004. In some examples said control signals are selectively transmitted 2001 2002 2003 to the RTP device 2004 where they are received and processed in order to control said RTP device 2004, which in some examples controls functions such as turning said device on or off 2004 2014; in some examples puts said device in or out of standby or suspend mode 2004 2014 (such as powering down a solar powered RTP from dusk until dawn); and in some examples turns on or off one or a plurality of sensors 2004 2013 (such as in some examples using a camera for video observation 2004 2013, in some examples using only a microphone for listening 2004 2013, in some examples using weather sensors to determine local conditions 2004 2013, in some examples using infrared night vision (herein IR) 2004 2013 for nighttime observation, in some examples triggering some sensors or functions automatically such as with a motion detector 2004 2013, and in some examples setting alerts 2004 2013 such as by specific sounds, specific identities, etc.). In some examples said control signals are received and processed 2004 in order to control one or a plurality of simultaneous RTP processes such as constructing one or a plurality of digital realities (as described elsewhere) in real-time while transmitting said digital realities in one or a plurality of separate streams 2016. In some examples an RTP 2004 may be shared and the remote user(s) 2001 2002 2003 who are sharing said RTP device 2004 provide separate user control of separate RTP processing or functions, such as in some examples creating and controlling a separate digital reality(ies).
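Such control-signal processing may be illustrated with a minimal dispatcher sketch (hypothetical command vocabulary; an actual RTP would authenticate, validate and log every signal):

class FixedRTP:
    """Dispatch received control signals to the matching device function."""
    def __init__(self):
        self.powered = True
        self.standby = False
        self.sensors = {"camera": True, "microphone": True, "ir": False}

    def handle(self, cmd: dict):
        op = cmd.get("op")
        if op == "power":
            self.powered = bool(cmd["on"])
        elif op == "standby":   # e.g., a solar RTP sleeping from dusk to dawn
            self.standby = bool(cmd["on"])
        elif op == "sensor":    # toggle one sensor on or off
            self.sensors[cmd["name"]] = bool(cmd["on"])
        else:
            raise ValueError(f"unknown control signal: {op!r}")

rtp = FixedRTP()
rtp.handle({"op": "sensor", "name": "ir", "on": True})  # nighttime observation
assert rtp.sensors["ir"]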

In the following fixed RTP examples various individual components, and combinations of components, are known and will not be described in detail herein. In some examples fixed RTP's 2004 are comprised of a land-based RTP device 2011 in a location such as Times Square, New York 2012; with sensors in some examples such as day and night cameras 2013 and microphones 2013; with power sources such as A/C 2014, solar 2014, and battery 2014; with remote control 2001 2002 2003 of the RTP device 2015 including control of processing 2015 and applications 2015 (such as digital realities construction); and with communications such as WiFi 2016, wired network 2016, WiMAX 2016; and with optional two-way video communications by means such as an LCD screen and a speaker. In some examples fixed RTP's 2004 are comprised of a land-based RTP device 2011 in a nature location such as an Everglades bird rookery 2012; with sensors in some examples such as day and night cameras 2013, microphones 2013, motion detectors 2013, GPS 2013, and weather sensors 2013; with power sources such as solar 2014, and battery 2014; with remote control 2001 2002 2003 of the RTP device 2015 including control of processing 2015 and applications 2015 (such as digital realities construction); and with communications such as satellite 2016, WiMAX 2016, cellular radio 2016, etc. In some examples fixed RTP's 2004 are comprised of a land-based RTP device 2011 in a location such as any public or private RTP installation 2012; with sensors in some examples such as day and night cameras 2013, microphones 2013, motion detectors 2013, etc.; with power sources such as A/C 2014, solar 2014, and battery 2014; with remote control 2001 2002 2003 of the RTP device 2015 including control of processing 2015 and applications 2015 (such as digital realities construction); and with communications such as WiFi 2016, wired network 2016, WiMAX 2016, satellite 2016, cellular radio 2016; and with optional two-way video communications by means such as an LCD screen and a speaker.

In some examples fixed RTP's 2004 are comprised of a water-based RTP device 2011 in a location such as submerged on a shallow coral reef 2012; with sensors in some examples such as a camera 2013, microphone 2013, motion detector 2013, etc.; with power sources such as an above water solar panel 2014 (fixed on a permanent structure or floating on a substantial anchored buoy) and battery 2014; with remote control 2001 2002 2003 of the RTP device 2015 including control of processing 2015 and applications 2015 (such as digital realities construction); and with communications such as satellite 2016, cellular radio 2016, etc. In some examples fixed RTP's 2004 are comprised of a water-based RTP device 2011 in a water location such as a tropical waterfall 2012, a reef 2012 or another water feature 2012 as determined by a tropical resort hotel; with sensors in some examples such as a camera 2013, microphone 2013, motion detector 2013, GPS 2013, weather sensors 2013, infrared night camera 2013, etc.; with power sources such as A/C 2014, solar 2014, and battery 2014; with remote control 2001 2002 2003 of the RTP device 2015 including control of processing 2015 and applications 2015 (such as digital realities construction); and with communications such as WiFi 2016, WiMAX 2016, satellite 2016, cellular radio 2016, etc.

In some examples fixed RTP's 2004 are comprised of an aerial-based RTP device 2011 in a location such as a penthouse balcony overlooking Central Park in New York City 2012; with sensors in some examples such as a camera 2013, microphone 2013, motion detector 2013, GPS 2013, weather sensors 2013, infrared night camera 2013, etc.; with a power source such as A/C 2014; with remote control 2001 2002 2003 of the RTP device 2015 including control of processing 2015 and applications 2015 (such as digital realities construction); and with communications such as WiFi 2016 or wired networking 2016; etc. In some examples fixed RTP's 2004 are comprised of an aerial-based RTP device 2011 in a location such as mounted on a tree trunk along the bank of the Amazon River in Brazil 2012, the Congo River in Africa 2012, or the busy Ganges in India 2012; with sensors in some examples such as a camera 2013, microphone 2013, motion detector 2013, GPS 2013, weather sensors 2013, night camera 2013, etc.; with power sources such as a mounted solar panel 2014 and battery 2014; with remote control 2001 2002 2003 of the RTP device 2015 including control of processing 2015 and applications 2015 (such as digital realities construction); and with communications such as WiFi 2016, WiMAX 2016, satellite 2016, cellular radio 2016, etc. In some examples fixed RTP's 2004 are comprised of an aerial-based RTP device 2011 in a location such as a tower or weather balloon over a landmark or attraction 2012 such as a light tower over a sports stadium 2012, a weather balloon over a golf course during a PGA tournament 2012, a lighthouse over the rocky Maine shoreline 2012; with sensors in some examples such as a camera 2013, microphone 2013, motion detector 2013, GPS 2013, weather sensors 2013, infrared night camera 2013, etc.; with power sources such as A/C 2014, solar 2014, battery 2014, etc.; with remote control 2001 2002 2003 of the RTP device 2015 including control of processing 2015 and applications 2015 (such as digital realities construction); and with communications such as WiFi 2016, WiMAX 2016, satellite 2016, cellular radio 2016, etc.

In some examples a fixed RTP 2004 may be comprised of a space-based RTP device 2011 in a location such as aboard a geosynchronous weather satellite over a fixed location on the Earth 2012; with sensors in some examples such as a camera 2013, infrared night camera 2013, etc.; with power sources such as solar 2014, battery 2014, etc.; with remote control 2001 2002 2003 of the RTP device 2015 including control of processing 2015 and applications 2015 (such as digital realities construction); and with communications such as satellite 2016, radio 2016, etc.

Turning now to FIG. 28, “Mobile RTP (Remote Teleportal),” in some examples an RTP 2024 (as described elsewhere) is a mobile and remotely controlled RTP device 2024 that may operate on the ground 2031, in the ocean 2031 or in another body of water 2031, in the sky 2031, or in space 2031. In some examples 2030 a mobile RTP has a remotely controllable orientation and direction so that it provides observation, data collection, recording, processing, and (optional) two-way communications in any part(s) of the zone or domain that it is directed to occupy and/or observe by means of its mobility.

Said remote control of said mobile RTP 2024 2030 includes sending control signal(s) from one or a plurality of controlling devices 2021 2022 2023, receiving said control signal(s) by said RTP 2024 2035, processing said received control signal(s) by said RTP 2024 2035, then controlling the appropriate RTP function 2024 2032 2033 2034 2035 2036, component 2024 2033, sensor 2024 2033, mobility 2024 2032, communications 2024 2036, etc. of said RTP device 2024. In some examples the remote control of said mobile RTP operates as described elsewhere, such as controlling one or a plurality of simultaneous RTP processes such as constructing one or a plurality of digital realities (as described elsewhere) in real-time while transmitting said digital realities in one or a plurality of separate streams 2036. In some examples a mobile RTP 2024 may be shared and the remote user(s) 2021 2022 2023 who are sharing said RTP device 2024 provide separate user control of separate RTP processing or functions, such as in some examples creating and controlling a separate digital reality(ies).
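The shared case, in which several remote users each hold an independently constructed digital reality over the same raw sensor feed, might be sketched as follows (hypothetical names; a sketch only):

class SharedRTP:
    """One RTP shared by several remote users, each with a separate
    processing pipeline applied to the same raw frames."""
    def __init__(self):
        self.pipelines = {}  # user id -> that user's frame transform

    def attach(self, user, transform):
        self.pipelines[user] = transform

    def broadcast(self, raw_frame):
        """Return a separately constructed stream frame per user."""
        return {user: fn(raw_frame) for user, fn in self.pipelines.items()}

rtp = SharedRTP()
rtp.attach("user-a", lambda f: f + " +reality-A")
rtp.attach("user-b", lambda f: f + " +reality-B")
print(rtp.broadcast("frame-001"))  # each user gets a different digital reality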

In the following mobile RTP examples various individual components, and combinations of components, are known and will not be described in detail herein. In some examples mobile RTP's 2024 are comprised of a ground-based mobile RTP device 2031 such as a remotely controlled telepresence robot on wheels 2032 in a location such as a company's offices 2032; with sensors in some examples such as one or a plurality of cameras 2033, speakers 2033 and microphones 2033; with power sources such as A/C 2034, solar 2034, and battery 2034; with mobility such as wheels for going to numerous locations throughout the offices 2032, wheels for accompanying people who are walking 2032, swivels for turning to face in different directions 2032, raising or lowering heights for communicating eye-to-eye 2032; with remote control 2021 2022 2023 of the mobile RTP device 2024 including control of processing 2035 and applications 2035 (such as digital realities construction); and with communications such as WiFi 2036, wired network 2036, WiMAX 2036; and with optional two-way video communications by means such as an LCD screen and a speaker. In some examples mobile RTP's 2024 are comprised of a ground-based mobile RTP device 2031 such as a remotely controlled vehicle mounted RTP 2032 in a location such as a company's trucks 2032, construction equipment 2032, golf carts 2032, forklift warehouse trucks 2032, etc.; with sensors in some examples such as one or a plurality of cameras 2033, speakers 2033, microphones 2033, GPS 2033, motion detectors 2033, infrared night cameras 2033, weather sensors 2033, etc.; with power sources such as said vehicle's electric power 2034, solar 2034, and battery 2034; with mobility such as said vehicle's mobility 2032 so that said vehicle(s) have tracking, observation, optional real-time communication, etc.; with remote control 2021 2022 2023 of the mobile RTP device 2024 including control of processing 2035 and applications 2035 (such as digital realities construction); and with communications such as WiFi 2036, WiMAX 2036, cellular radio 2036, satellite 2036, etc.; and with optional two-way video communications by means such as an LCD screen and a speaker. In some examples mobile RTP's 2024 are comprised of a ground-based mobile RTP device 2031 such as a remotely controlled personal RTP 2032 that is worn by an individual; with sensors in some examples such as one or a plurality of cameras 2033, speakers 2033, microphones 2033, GPS 2033, motion detectors 2033, infrared night cameras 2033, weather sensors 2033, etc.; with power sources such as solar 2034, battery 2034, A/C 2034; with mobility such as said individual's mobility 2032 so that said individual carries RTP tracking, observation, real-time communication, etc.; with remote control 2021 2022 2023 of the personal mobile RTP device 2024 including remote control of processing 2035 and applications 2035 (such as digital realities construction); and with communications such as WiFi 2036, WiMAX 2036, cellular radio 2036, satellite 2036, LAN port 2036, etc.; and with optional two-way video communications by means such as a speaker and an LCD screen or a projector.

In some examples mobile RTP's 2024 are comprised of an ocean-based mobile RTP device 2031 such as a remotely controlled ship or boat mounted RTP 2032 in one or more locations aboard a ship 2032; with sensors in some examples such as one or a plurality of cameras 2033, speakers 2033, microphones 2033, GPS 2033, motion detectors 2033, infrared night cameras 2033, weather sensors 2033, etc.; with power sources such as said vessel's electric power 2034, solar 2034, and battery 2034; with mobility such as said vessel's mobility 2032 so that said vessel has RTP tracking, observation, optional real-time communication, etc.; with remote control 2021 2022 2023 of the mobile RTP device 2024 including control of processing 2035 and applications 2035 (such as digital realities construction); and with communications such as WiFi 2036, WiMAX 2036, cellular radio 2036, satellite 2036, etc.; and with optional two-way video communications by means such as an LCD screen and a speaker. In some examples mobile RTP's 2024 are comprised of an ocean-based mobile RTP device 2031 such as a remotely controlled submarine (or underwater glider) mounted RTP 2032; with sensors in some examples such as one or a plurality of cameras 2033, microphones 2033, GPS 2033, motion detectors 2033, infrared night cameras 2033, weather sensors 2033, etc.; with power sources such as said submarine's electric power 2034, occasional solar 2034 (when surfaced), and battery 2034; with mobility such as said submarine's mobility 2032 so that said submarine has RTP tracking, observation, sensor data collection, etc.; with remote control 2021 2022 2023 of the mobile RTP device 2024 including control of processing 2035 and applications 2035 (such as digital realities construction); and with communications such as WiFi 2036, WiMAX 2036, cellular radio 2036, satellite 2036, etc.

In some examples mobile RTP's 2024 are comprised of a sky-based mobile RTP device 2031 such as a remotely controlled balloon or aircraft mounted RTP 2032 in one or more locations below a balloon 2032, or mounted in or on an aircraft 2032 (such as a radio controlled plane, a UAV, a drone, a radio controlled helicopter, etc.); with sensors in some examples such as one or a plurality of cameras 2033, microphones 2033, GPS 2033, motion detectors 2033, infrared night cameras 2033, weather sensors 2033, etc.; with power sources such as said balloon's equipment's or aircraft's battery or electric power 2034; with mobility such as said balloon's mobility 2032 or said aircraft's mobility 2032 so that said conveyance has mobile RTP tracking, observation, etc.; with remote control 2021 2022 2023 of the mobile RTP device 2024 including control of processing 2035 and applications 2035 (such as digital realities construction); and with communications such as WiFi 2036, WiMAX 2036, cellular radio 2036, satellite 2036, etc.

In some examples a mobile RTP 2024 may be comprised of a space-based device 2024 in a location such as aboard a weather satellite orbiting the Earth 2032; with sensors in some examples such as a camera 2033, infrared night camera 2033, etc.; with power sources such as solar 2034, battery 2034, etc.; with remote control 2021 2022 2023 of the RTP device 2024 including control of processing 2035 and applications 2035 (such as digital realities construction); and with communications such as satellite 2036, radio 2036, etc.

TP devices architecture and processing: Today a few hundred dollars buys a graphics card (a GPU or Graphics Processing Unit) that is more powerful than most supercomputers from a decade ago. Just as graphical processing transformed “green screen” text interfaces into GUIs (Graphical User Interfaces), today's continuously advancing CPUs and GPUs turn photographs into real looking images that never existed; or turn photographs into many styles of paintings; or help design large buildings with architectural plans that are ready to be built; or model structures to test them for wind, sun and shadow patterns, neighborhood traffic, and much more; or play computer games with real-time cinema quality realism and surround sound; or construct digital realities; or design personal clothes online that will be delivered in less than a week; or show live football games on television with dynamic first down lines and information (like large “3rd and 10” signs) displayed on the ground under the 22 live football players moving on the field. To do this CPUs evolved into multi-core CPUs that are now routinely shipped in computers and computing devices of all sizes and types. The design and shipment of devices that include multi-core GPU's, multiple GPU's and multiple co-processors has already begun, and greater GPU processing capabilities may be expected in the future. Already, some devices could include the hardware and software to transform physical reality into “digital reality” in real time—and this may become a commonplace mainstream capability in the future.

FIG. 29 through FIG. 35 provide some examples of components and features of extensible TP devices: FIG. 29, “High-level TP Device Architecture”: The computing capacity of an entire mainframe computer from the “mainframe era” of computing is eclipsed by one of today's advanced laptop computers. In some examples a plurality of components, systems, methods, processes, technologies, devices and other means are combined in varying ways to form a TP device. FIG. 29 describes an architecture for combining the capacity of a plurality of devices within a single TP device, including digital realities creation (as described elsewhere) along with other communications, broadcasting, editing, and display capabilities, as described elsewhere.

FIG. 30, “TP Device Processing Location(s)”: In some examples the TP processing required (such as for a given video and/or audio synthesis or other TP processing as described elsewhere) is supported by a TP device, in which case it can be performed by said device. In some examples, however, the required TP processing is not supported by a given TP device, in which case it is determined whether or not an appropriate remote TP processing resource is available; if available, said required TP processing can be performed on the remote TP resource with the output streamed to the TP device. However, if a remote TP resource is not available then the TP device's limits are applied to the TP device's processing so that only its limited processing capabilities are applied to produce the limited output that is displayed.
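
A minimal sketch of this three-way decision, assuming hypothetical function and parameter names:

```python
# Sketch of the three-way decision in FIG. 30: run locally when supported,
# fall back to a remote TP resource, else degrade to the device's limits.
def process(required: set, device_caps: set, remote_available: bool) -> str:
    if required <= device_caps:
        return "performed on the TP device; output displayed locally"
    if remote_available:
        return "performed on a remote TP resource; output streamed to the device"
    # No remote resource: apply the device's limits and show limited output.
    return "device limits applied; limited output displayed"

print(process({"video_synthesis"}, {"video_synthesis"}, remote_available=False))
print(process({"background_replacement"}, set(), remote_available=True))
print(process({"background_replacement"}, set(), remote_available=False))
```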

FIG. 31, “TP Device Processing Components Flow”: In some examples TP devices simultaneously receive from a plurality of sources and send to a plurality of recipients that can be in some examples one or a plurality of SPLS members; in some examples one or a plurality of IPTR; in some examples one or a plurality of focused connections; in some examples one or a plurality of broadcast sources; and in some examples one or a plurality of other types of networked electronic connections. In some examples TP devices simultaneously convert data received from said plurality of sources, as well as simultaneously convert data sent to said plurality of sources into an appropriate format(s) for internal processing. In some examples TP devices simultaneously synthesize and combine one or a plurality of digital realities (as described elsewhere). In some examples TP devices simultaneously generate and display one or a plurality of outputs in one or a plurality of formats on one or a plurality of local and/or remote displays, including in some examples storing said outputs for future use, in some examples for future broadcasts, in some examples for other purposes and functions. In some examples TP devices are under user control such that the various inputs, outputs, synthesis, editing, mixing, effects, displays and other functions may be varied and directed by a plurality of types of user controls. In some examples a plurality of user I/O devices may be utilized by a user during the use of a TP device. In some examples a plurality of storage means may be utilized by a TP device. In some examples a plurality of memory means may be utilized by a TP device. In some examples one or a plurality of CPUs, including in some examples multi-core CPUs, may be utilized by a TP device. In some examples a plurality of GPUs, including in some examples multi-core GPUs, may be utilized by a TP device. In some examples one or a plurality of subsystems may be utilized by a TP device.

FIG. 32, “TP Device Processing of Broadcasts”: In some examples a TP device may be utilized for watching one or a plurality of broadcast sources; in some examples for recording one or a plurality of broadcast sources; in some examples for digitally altering one or a plurality of live broadcasts; in some examples for digitally altering one or a plurality of recorded broadcasts; in some examples for utilizing parts or all of a live or recorded broadcast in a digital synthesis; in some examples for broadcasting a recorded broadcast; in some examples for broadcasting a digitally synthesized live or recorded broadcast; and in some examples for performing other functions as described herein.

FIG. 33, “TP Device Processing—Multiple/Parallel”: In some examples TP devices can process one or a plurality of simultaneous connections by means of a scalable plurality of, in some examples, simultaneous processes; in some examples simultaneous processing; and in some examples simultaneous connections.
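
One plausible reading of this scalable parallelism, sketched with Python's standard thread pool; the per-connection work shown is a placeholder:

```python
# One plausible reading of FIG. 33's scalable parallelism: each simultaneous
# connection is handled by its own worker, scaled to the number of connections.
from concurrent.futures import ThreadPoolExecutor

def handle_connection(conn_id: int) -> str:
    # Placeholder for per-connection receive/convert/synthesize/send work.
    return f"connection {conn_id}: processed"

connections = range(4)
with ThreadPoolExecutor(max_workers=4) as pool:
    for result in pool.map(handle_connection, connections):
        print(result)
```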

FIG. 34, “Local and Distributed TP Device Processing Locations”: In some examples some or all TP device processing is performed by a sending TP device; in some examples some or all TP device processing is performed by a receiving TP device; in some examples some or all TP device processing is performed remotely such as by a third-party application or service or by a TP server or application on a network; in some examples TP device processing is distributed between two or a plurality of TP devices and/or third parties that are connected by means of one or a plurality of networks; and in some examples TP device processing is performed by a plurality of TP devices and/or third-parties such that different users see differently processed and differently constructed video and audio.

FIG. 35, “Device(s) Commands Entry”: Some examples illustrate part of the process of entering commands into TP devices, including a plurality of user I/O devices such as in some examples a pointing device, in some examples physical gestures, in some examples a trackball, in some examples a joystick, in some examples voice or speech (in some examples including speakers for audio feedback), in some examples a touch interface, in some examples a graphics tablet, in some examples a touchpad, in some examples a remote control, in some examples a camera, in some examples a puck, in some examples a keyboard, in some examples a known device such as a smart phone running a VTP, in some examples eye tracking, in some examples a 3D gyroscopic mouse, in some examples a game pad, in some examples a balance board, in some examples simulated devices such as a steering wheel or sword or musical instrument, and in some examples another type of I/O means. In some examples a new I/O means may be added; in some examples a new feature may be added to an existing I/O means; and in some examples a reconfiguration of I/O means may be performed.

Turning now to FIG. 29, “High-level TP Device Architecture,” TP device architecture refers to some examples of physical TP devices such as in some examples an LTP 1140; in some examples an MTP 1140; in some examples an RTP 1140; in some examples an AID/AOD 1140; in some examples a TP server 1140; in some examples a TP subsidiary device that is under RCTP control (remote control by a TP device) 1164 1166; in some examples any other extensible configuration of a TP device that includes sufficient physical components, as described elsewhere, to provide Teleportal connections 1140. The illustration in FIG. 29 may be implemented in some examples with any suitable specialized device, in some examples with a general purpose computing system, in some examples with a special-purpose computing system, in some examples with a combination of multiple networked computing systems, or in some examples with any hardware configuration by which a TP device may be provided, whether in a single device or in a distributed computing environment where various modules and functions are located in local and remote computer devices, storage, and media so that tasks are performed by separate devices and linked through a communications network(s). In some examples TP devices 1140 may include but are not limited to a customized special purpose device 1140, in some examples a distributed device with its tasks performed by two or a plurality of networked devices 1140, and in some examples another type of specialized computing device(s) 1140.

In some examples TP devices 1140 may be implemented as individually designed TP devices, in some examples as general-purpose desktop personal computers, in some examples as workstations, in some examples as handheld devices, in some examples as mobile computing devices, in some examples as electronic tablets, in some examples as electronic pads, in some examples as netbooks, in some examples as wireless phones, in some examples as in-vehicle devices, in some examples as a device that is a component of equipment, in some examples as a device that is a component of a system, in some examples as servers, in some examples as network servers, in some examples as mainframe computers, in some examples as distributed computing systems, in some examples as consumer electronics, in some examples as online televisions, in some examples as television set-top boxes, in some examples as any other form of electronic device. In some examples said TP device 1140 is physically located with a user who is in a focused connection; in some examples said TP device 1140 is owned by a user who is in a focused connection but is remote from said TP device and is utilizing it for processing; in some examples said TP device 1140 is owned by a third party such as a service and said TP device's processing is an element of said service; in some examples said TP device 1140 is an element of a network that is being utilized for a Teleportal connection; in some examples said TP device 1140 is at any network accessible location.

In some examples FIG. 29 also provides a high-level illustration of the use of said TP device 1140 to open SPLS(s) (Shared Planetary Life Spaces) presence connections (as described elsewhere in more detail) and focus TP connections (as described elsewhere in more detail). In some examples a first step is to open one or a plurality of SPLS's (Shared Planetary Life Spaces), a second step is to focus one or a plurality of TP connections with SPLS members, a third step is to add additional PTR to one or more focused TP connections, and a fourth or later step is to perform other TP functions as described elsewhere. The program(s), module(s), component(s), instruction(s), program data, user profile(s) data, IPTR data, etc. that enable operation of the TP device 1140 to perform said steps may be stored in local storage 1143 and/or remote storage 1143 and retrieved as needed to operate said TP device 1140. As SPLS's are opened, focused connections are made, IPTR added, or other functions utilized, an output video is generated to include the appropriate participants as described elsewhere, and other context may be added to said output video such as a place(s), advertisement(s), content(s), object(s), etc. as described elsewhere; with said output video generated in some examples at one or a plurality of the participants' local TP devices 1140, in some examples at one or a plurality of their remote TP devices 1140, in some examples at a remote TP device that is an element of a network 1174, in some examples by a TP server or TP service that is attached to a network 1174, or in some examples by other means as described elsewhere. In some examples this enables a single TP device 1140 to provide the output video; and in some examples this enables a plurality of TP devices 1140 to provide a plurality of output videos that are customized for different participants as specified by each participant either manually or automatically (as described elsewhere). In some examples participants utilize TP devices 1140 that contain the appropriate components and capabilities to produce output video; while in some examples one or a plurality of participants utilize TP devices that are able to communicate but are not able to produce output video (which is processed separately from their TP device) 1140; while in some examples one or a plurality of TP devices 1140 possess only limited capabilities such as in some examples decoding video or audio, in some examples decompressing video or audio, and in some examples generating a signal that is formatted for display on that particular TP device.
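
The four-step sequence above can be sketched as a hypothetical session object; the method and argument names are illustrative only:

```python
# Step sequence described above, as a hypothetical session object: open one or
# more SPLS's, focus connections with members, add PTR, then generate output.
class TPSession:
    def __init__(self) -> None:
        self.opened_spls: list[str] = []
        self.focused: list[str] = []
        self.ptr: list[str] = []

    def open_spls(self, name: str) -> None:
        self.opened_spls.append(name)        # step 1: open an SPLS

    def focus(self, member: str) -> None:
        self.focused.append(member)          # step 2: focus a TP connection

    def add_ptr(self, item: str) -> None:
        self.ptr.append(item)                # step 3: add a Place/Tool/Resource

    def output_video(self) -> str:
        # step 4: generate output video with participants plus added context
        return f"participants={self.focused} context={self.ptr}"

session = TPSession()
session.open_spls("family")
session.focus("identity-A")
session.add_ptr("beach place")
print(session.output_video())
```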

In some examples said TP device components include a plurality of known devices, systems, methods, processes, technologies, etc. which are constituents that are combined in varying new or known ways to form a TP device. In some examples TP devices 1140 may include but are not limited to a system bus 1146 that couples system components such as one or a plurality of processors 1148 1149 1150, memory 1142, storage 1143, and interfaces 1160 1161 that in turn connect user I/O devices 1141, subsidiary processors such as in some examples a broadcast tuner(s) 1161, in some examples a GPU (Graphics Processing Unit) 1161, in some examples an audio sound processor 1161, and in some examples another type of subsidiary processor 1161. In some examples the system bus 1146 may be of any known type of bus including a local bus, a memory bus or memory controller, and a peripheral bus; with some examples of known bus architectures including Micro Channel Architecture (MCA) bus, Industry Standard Architecture (ISA) bus, Enhanced ISA (EISA) bus, Video Electronics Standards Association (VESA) local bus, Peripheral Component Interconnect (PCI) bus, or any known bus architecture.

In some examples said TP device 1140 may include but is not limited to a plurality of known types of computer readable storage media 1143, which may include any available type of removable or non-removable storage media, or volatile or nonvolatile storage media that may be accessed either locally or remotely including in some examples Teleportal Network servers or storage 1143, in some examples one or a plurality of other Teleportal devices' storage 1143, in some examples a remote data center(s) 1143, in some examples a Storage Area Network (SAN) 1143, or in some examples other remote information storage 1143. In some examples storage 1143 may be implemented by any technology and method for information storage such as in some examples computer readable instructions, in some examples data structures, in some examples program modules, or in some examples other data. In some examples computer storage media includes but is not limited to one or a plurality of hard disk drives 1143, in some examples RAM 1143, in some examples ROM 1143, in some examples DVD 1143, in some examples CD-ROM 1143, in some examples other optical disk storage 1143, in some examples flash memory 1143, in some examples EEPROM 1143, in some examples other memory technology 1143, in some examples magnetic tape 1143, in some examples magnetic cassettes 1143, in some examples magnetic disk storage 1143, in some examples other magnetic storage devices 1143. In some examples storage 1143 is connected to the system bus 1146 by one or a plurality of interfaces 1160 such as in some examples a hard disk drive interface 1160 1161, in some examples an optical drive interface 1160 1161, in some examples a magnetic drive interface 1160 1161, in some examples another type of storage interface 1160 1161.

In some examples said TP device 1140 may include but is not limited to a control unit 1144 which may include components such as a basic input/output system (BIOS) 1145 that contains some routines for transferring information between elements of a TP device such as in some examples during startup. In some examples a control unit 1144 may include components such as in some examples an operating system 1145, control applications 1145, utilities 1145, application programs 1145, program data 1145, etc. In some examples said operating system 1145, control applications 1145, utilities 1145, application programs 1145, or program data 1145 may be stored in some examples on a hard disk 1143, in some examples in ROM 1142, in some examples on an optical disk 1143, in some examples in RAM 1142, in some examples in another type of storage 1144, or in some examples in another type of memory 1142.

In some examples said TP device 1140 may include but is not limited to memory 1142 which may include random access memory (RAM) 1142, in some examples read only memory (ROM) 1142, in some examples flash memory 1142, or in some examples other memory 1142. In some examples memory 1142 may include a memory bus, in some examples a memory controller 1160, in some examples memory 1142 may be directly integrated with one or a plurality of processors 1148 1149 1150, or in some examples another type of memory interface 1160.

In some examples said TP device's 1140 components are connected to the system bus 1146 by a unique interface 1160 or in some examples by an interface 1160 that is shared by two or a plurality of components 1160; and said interfaces may in some examples be a user I/O device interface 1160 1161, in some examples a storage interface 1160 1161, in some examples another type of interface 1160 1161. In some examples said TP device 1140 may include but is not limited to one or a plurality of user I/O devices 1141 which in some examples includes a plurality of input devices and output devices such as a mouse/mice 1141, in some examples a keyboard(s) 1141, in some examples a camera(s) 1141, in some examples a microphone(s) 1141, in some examples a speaker(s) 1141, in some examples a remote control(s) 1141, in some examples a display(s) or monitor(s) 1141, in some examples a printer(s) 1141, in some examples a tablet(s) or pad(s) 1141, in some examples a touchscreen(s) 1141, in some examples a touchpad(s) 1141, in some examples a joystick(s) 1141, in some examples a game pad(s) 1141, in some examples a wireless hand-held 3-D pointing device(s) or controller(s) 1141, in some examples a trackball(s) 1141, in some examples a configured smart phone(s) 1141, in some examples another type of user I/O device 1141. In some examples these user I/O devices are connected to the system bus 1146 by one or a plurality of interfaces 1160 such as in some examples a video interface 1160 1161, in some examples a Universal Serial Bus (USB) 1160 1161, in some examples a parallel port 1160 1161, in some examples a serial port 1160 1161, in some examples a game port 1160 1161, in some examples an output peripheral interface 1160 1161, in some examples another type of interface 1160 1161.

In some examples TP devices 1140 may include but are not limited to one or a plurality of user interface(s) components to select TP device options, control the opening and closing of SPLS's and/or their individual members, control focusing a connection and its individual attributes, control the addition and synthesis of IPTR such as in a focused connection, control the TP display(s), and control other aspects of the operation of said TP device 1140; and these controls may be included in any known or practical interface arrangement, layout, design, alignment, user I/O device, remote control of a Teleportal, etc. In addition, updates to TP device interfaces, options, controls, features, etc. may be downloaded and applied to said TP device 1140 in some examples automatically, in some examples periodically, in some examples on a schedule, in some examples by a user's manual control, or in some examples by any known means or process; and if downloaded said updates may in some examples be available and presented for immediate use, in some examples the user may be informed when said updates are made, in some examples the user may be asked to approve said updates before they are available for use, in some examples the user may be required to approve the downloading and installation of said updates, in some examples the user may be required to run a setup process to install an update, and in some examples any other known download and/or installation process may be utilized.

In some examples said TP device 1140 may include but is not limited to one or a plurality of processors 1148 1149 1150, such as in some examples a single Central Processing Unit (CPU) 1148, in some examples a plurality of processors 1148 1149 1150 which in some examples include one or a plurality of video processors 1150, in some examples include one or a plurality of audio processors 1149, in some examples include one or a plurality of GPUs (Graphics Processing Units) 1149 1150, and in some examples include a control CPU 1148 that provides control and scheduling of other processors 1149 1150. In some examples TP devices 1140 may include but are not limited to a supervisor CPU 1148 along with one or a plurality of co-processors 1149 1150 that are variable in number, selectable in use and coupled by a bus 1146 with the supervisor CPU 1148. In some examples the supervisor CPU 1148 and co-processors 1149 1150 employ memory 1142 to store portions of one or a plurality of video streams, video inputs, partially processed video, video mixes, video effects, etc. (in which the term “video” includes related audio). In some examples a supervisor application is run by the supervisor CPU 1148 to control each co-processor 1149 1150 to read a selected portion of the video temporarily stored in memory 1142; process it 1149 1150 such as by mixing, effects, background replacement(s), etc. as described elsewhere; and output it for display and/or transmission to a designated recipient(s). In some examples a supervisor application is run by the supervisor CPU 1148 to manage in some examples the user instructions for the video synthesis of focused connections such as the synthesis of the view(s) in a focused connection, in some examples the currently open SPLS's, in some examples one or a plurality of logged in identities for the current user, in some examples one or a plurality of focused TP connections, in some examples one or a plurality of PTR within those focused connections, in some examples dynamic changes in the current user's presence, in some examples dynamic changes in the presence of SPLS members, in some examples dynamic changes in the presence of participants in focused TP connections, and in some examples other aspects of the operation of said TP device 1140. In some examples the number of co-processors 1149 1150 is selectable; in some examples the number of video inputs is selectable such as how many PTR in which to add to a focused connection; in some examples the number of participants in each focused connection is selectable; and in some examples other aspects of the operation of said TP device 1140 and said focused TP connections are selectable.
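
A hedged sketch of the supervisor pattern described above, in which a supervisor schedules portions of buffered video for co-processors that process and output them; all names and task fields are illustrative:

```python
# Sketch of the supervisor pattern: a supervisor selects a portion of buffered
# video for each co-processor, which mixes/applies effects and returns output
# for display or transmission to a designated recipient.
from queue import Queue

def co_processor(task: dict) -> str:
    # Read the selected portion from "memory", process, and label the output.
    return f"{task['portion']} -> {task['operation']} -> sent to {task['recipient']}"

def supervisor(tasks: list) -> list:
    queue: Queue = Queue()
    for t in tasks:
        queue.put(t)                 # schedule work for the co-processors
    results = []
    while not queue.empty():
        results.append(co_processor(queue.get()))
    return results

for line in supervisor([
    {"portion": "stream-1 frames", "operation": "background replacement",
     "recipient": "identity-B"},
    {"portion": "stream-2 frames", "operation": "mix at 60/40",
     "recipient": "display"},
]):
    print(line)
```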

In some examples TP devices 1140 may include but are not limited to utilizing one or a plurality of co-processors such as video processors 1150, audio processors 1149, GPUs 1149 1150 to synthesize one or a plurality of focused connections according to each focused connection's video/audio input and participant('s) selections, and (optionally) include PTR such as in some examples a place or context, or in some examples advertisements that are personalized and customized for each participant. In some examples video processing 1150 and/or audio processing 1149 may be applied separately to each video input such as in some examples personal images, in some examples place backgrounds, in some examples background objects, in some examples inserted advertisements, etc.; such as in some examples resizing, in some examples resolution, in some examples orientation, in some examples tilt, in some examples alignment with respect to each other, in some examples morphing into three dimensions, in some examples coloration, etc. In some examples video processing 1150 and/or audio processing 1149 may be applied separately to each focused connection such as in some examples dividing or subdividing one or a plurality of displays to present all or parts of each focused connection in a portion of said display(s) as selected by each user of each TP device 1140.

In some examples TP devices 1140 may include but are not limited to using one or a plurality of audio processors 1149 to receive and process audio signals from each source in a focused connection(s), and utilize known means to generate a 3-D spatial audio signal for playback by the local TP device's 1140 speakers, whenever two or more speakers are present that may be utilized for audio. In this manner, the audio signal may be processed 1149 to match the processed video output 1150 such that, for example, when a specific participant or object is displayed on the right side, the audio from said participant or object comes from a speaker(s) on the right side of the display, and the audio 1149 is balanced properly respective to the position of its source in the synthesized video 1150. Similarly, when a focused connection's context is a separately received place, that place's audio may be played so that it sounds natural and audible at a volume that is appropriate for the synthesized position(s) of the participants in that place. Similarly, when other video inputs and sources are combined 1150, their respective audio may be processed 1149 so that upon playback, the audio matches the processed output video 1150.
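
As an illustration of matching audio position to video position, here is a minimal constant-power panning sketch; it assumes a participant's horizontal screen position is normalized to the range 0 (left) to 1 (right), which is an assumption, not a detail from the specification:

```python
# Minimal constant-power pan: a participant's horizontal screen position x in
# [0, 1] steers that participant's audio between left and right speakers.
import math

def pan_gains(x: float) -> tuple:
    angle = x * math.pi / 2                       # map position to 0..90 degrees
    return math.cos(angle), math.sin(angle)       # (left gain, right gain)

left, right = pan_gains(0.8)                      # participant shown on the right
print(f"left={left:.2f} right={right:.2f}")       # right channel dominates
```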

In some examples said TP device 1140 may include but is not limited to one or a plurality of network interfaces 1154 1155 1156 for transferring data (including receiving, transmitting, broadcasting, etc.) between the TP device and in some examples a network 1174, in some examples other TP devices 1175 1176 1177 1178, in some examples Remote Control (RCTP) of TP Subsidiary Devices 1166 1167 1168 1169 1170 1171, in some examples an in-vehicle telematics device(s), in some examples a broadcast source(s) 1180, and in some examples other computing or electronic devices that may be attached to a network 1174. In some examples this connection can be implemented using one or a plurality of known types of network connections that connect the TP device 1140 in some examples to any type of wired network 1174, in some examples to any direct wired connection with another communicating device, in some examples to any type of wireless network 1174, and in some examples to any type of wireless direct connection 1174. In some examples this connection can be implemented using one or a plurality of known types of networks, in some examples by means of the Internet 1174, in some examples by means of an Intranet 1174, in some examples by means of an Extranet 1174, in some examples by means of other types of networks as described elsewhere 1174. In some examples this connection can be implemented using one or a plurality of known types of networking devices that are connected to said TP device 1140 in some examples through a network and in some examples directly connected to any type of communicating device, such as in some examples a broadband modem, in some examples a wireless antenna, in some examples a wireless base station, in some examples a Local Area Network (LAN) 1174, in some examples a Wide Area Network (WAN) 1174, in some examples a cellular network 1174, in some examples an IP or TCP-IP network 1174, in some examples a PSTN 1174, in some examples any other known type of network. In some examples said TP device 1140 can be connected using one or a plurality of peer-to-peer environments which in some examples include real-time communications whereby connected TP devices 1140 1175 communicate directly in a peer-to-peer manner with each other.

In some examples said TP device 1140 may operate in a network environment with one or a plurality of networks 1174 using said network(s) to form a connection(s) with one or a plurality of TP devices 1175 such as in some examples an LTP 1176; in some examples an MTP 1176; in some examples an RTP 1177; in some examples an AID/AOD 1178; in some examples a TP server 1174; in some examples a TP subsidiary device that is under RCTP control (remote control by a TP device) 1164 1166 1167 1168 1169 1170 1171; in some examples any other TP connections between an extensible TP device 1140 and a compatible remote device through means such as a network interface(s) 1154 1155 1156 and a network(s) 1174. When a LAN network environment 1174 is used, a network interface or adapter 1154 1155 1156 is typically employed for the LAN interface; and in turn, the LAN may be connected to a WAN 1174, the Internet 1174, or another type of network 1174 such as by a high bandwidth converged communication connection. When a directly connected WAN network environment 1174 is used, or a directly connected Internet network environment 1174 is used, or other direct means for establishing a communications link(s) is used, a modem is typically employed; and said modem may be internal or external to said TP device 1140. When one or a plurality of broadcast sources 1180 are used, the components and processes are described elsewhere, such as in FIG. 32.

In some examples TP devices 1140 may include but are not limited to one or a plurality of network interfaces 1154 1155 1156, each of which has a mux/demux 1151 1152 1153 that multiplexes/demultiplexes signals to and from the audio processor(s) 1149, video processor(s) 1150, GPU(s) 1149 1150, and CPU/data processor 1148; and in some examples each network interface 1154 1155 1156 has a format converter 1151 1152 1153 such as to convert from and to various video and/or audio formats as needed; and in some examples each network interface 1154 1155 1156 has an encoder/decoder (herein termed “Coder”) 1151 1152 1153 that decodes/encodes video streams to and from a TP device 1140, and in some examples one or a plurality of these conversion steps 1151 1152 1153 may be provided by one or a plurality of codecs. In turn, these varying combinations of network interfaces 1154 1155 1156, mux/demux 1151 1152 1153, format converter 1151 1152 1153, encoder/decoder 1151 1152 1153, and codec(s) 1151 1152 1153 provide input from and output to network(s) 1174.
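
A hypothetical per-interface pipeline matching this description, with stand-in transformations in place of real format conversion and codec work:

```python
# Hypothetical per-interface pipeline: demultiplex an incoming composite,
# convert each stream's format, then decode for internal processing.
def demux(composite: list) -> dict:
    streams: dict = {}
    for stream_id, chunk in composite:
        streams.setdefault(stream_id, []).append(chunk)
    return streams

def convert_format(chunk: bytes) -> bytes:
    return chunk.upper()               # stand-in for a real format conversion

def decode(chunk: bytes) -> str:
    return chunk.decode("ascii")       # stand-in for a real codec

composite = [("video", b"frame1"), ("audio", b"pcm1"), ("video", b"frame2")]
for stream_id, chunks in demux(composite).items():
    print(stream_id, [decode(convert_format(c)) for c in chunks])
```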

In some examples said TP device 1140 may include but is not limited to one or a plurality of multiplexers and demultiplexers (referred to in the figure as “MUX”) 1151 1152 1153 which in some examples provides switching such as selecting one of many analog or digital signals and forwarding the selected signal into a single line; in some examples combining several input signals into a single output signal; in some examples enabling one line from many to be selected and routed through to a particular output; in some examples combining two or more signals into a single composite signal; in some examples routing a single input signal to multiple outputs; in some examples sequencing access to a network interface so that multiple different processes may share a single interface whether for receiving signals or for transmitting signals; in some examples converting analog signals to digital; in some examples converting digital signals to analog; in some examples providing filters so that output signals are filtered; in some examples sending several signals over a single output line such as with time division multiplexing; in some examples sending several signals over a single output line such as with frequency division multiplexing; in some examples sending several signals over a single output line such as with statistical multiplexing; and in some examples taking a single input line that carries multiple signals and separating those into their respective multiple signals.
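
As one concrete illustration of the multiplexing behaviors listed above, this sketch shows time-division multiplexing: several input signals share one output line in fixed turn-taking slots, and the receiver separates them again by slot position:

```python
# Time-division multiplexing illustration: interleave several input signals
# onto one line, then demultiplex by slot position at the receiving end.
from itertools import chain, zip_longest

inputs = [["a1", "a2"], ["b1", "b2"], ["c1", "c2"]]
line = [x for x in chain.from_iterable(zip_longest(*inputs)) if x is not None]
print("multiplexed:", line)            # ['a1', 'b1', 'c1', 'a2', 'b2', 'c2']

demuxed = [line[i::len(inputs)] for i in range(len(inputs))]
print("demultiplexed:", demuxed)
```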

In some examples said TP device 1140 may include but is not limited to one or a plurality of encoders and/or decoders (referred to in the figure as “Coder”) 1151 1152 1153 which in some examples provide conversion of data from one format (or code) to another such as in some examples from an analog input to a digital data stream (A/D conversion, such as converting an analog composite video signal into a digital component video signal that includes a luminance signal, a color difference signal [Cb signal] and a color difference signal [Cr signal]); in some examples convert varied audio, video and/or text input into a common or standard format; in some examples compress data into a smaller size for more efficient transmission, streaming, playback, editing, storage, encryption, etc.; in some examples simultaneously convert and compress audio, video and/or text; in some examples convert signal formats that the TP device cannot process and encode them in a format the TP device can process; in some examples provide conversion from one codec to another; in some examples take audio and video data from a TP device and convert it to a format suitable for streaming, transmission, playback, storage, encryption, etc.; in some examples decode data that has been encoded; in some examples decrypt data that has been encrypted; in some examples receive a signal and turn it into usable data; and in some examples convert a scrambled video signal into a viewable image(s). In some examples said TP device 1140 may include but is not limited to one or a plurality of codecs (referred to in the figure as “Coder”) 1151 1152 1153 which in some examples provide encoding and/or decoding of one or a plurality of digital data streams and/or signals, such as for editing, transmission, streaming, playback, storage, encryption, etc.

In some examples said TP device 1140 may include but is not limited to one or a plurality of timers 1157 which in some examples are also known as sync generators; in some examples a timer counts time intervals and generates timed clock pulses used to synchronize video picture signals and/or video data streams; in some examples timing is used to synchronize various different video signals for editing, mixing, synthesis, output, transmission, streaming, etc.; in some examples timer pulses are utilized by one or a plurality of processors 1148 1149 1150 as timing instructions, as interrupt instructions, etc. to help control various steps in the editing, synthesis, mixing and/or effects process(es) such as mixing a plurality of different video signals from different sources and outputting a single synthesized and mixed video; in some examples to help control various steps in adding one or a plurality of special effects to a video; in some examples to help control various steps in outputting one or a plurality of videos into a single video output; in some examples to help control various steps in streaming one or a plurality of videos; in some examples to help control various other video timing or display functions.
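
A minimal sketch of using shared clock ticks to synchronize different video signals before mixing; sources and tick values are illustrative:

```python
# Sketch of timer-pulse synchronization: frames from different sources are
# grouped by the shared clock tick they belong to, and only co-timed frames
# are mixed together.
from collections import defaultdict

frames = [("cam-1", 0), ("cam-2", 0), ("cam-1", 1), ("cam-2", 1), ("cam-2", 2)]
by_tick = defaultdict(list)
for source, tick in frames:
    by_tick[tick].append(source)

for tick in sorted(by_tick):
    print(f"tick {tick}: mix {by_tick[tick]}")   # mix only co-timed frames
```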

In some examples said TP device 1140 may include subsystems 1158 1159 in which a subsystem is a specialized “engine” that provides specific types of functions and features including in some examples Superior Viewer Sensor (SVS) subsystem 1159; in some examples background replacement subsystem 1159; in some examples a recognition subsystem 1159 which provides recognitions such as faces, identities, objects, etc.; in some examples a tracking identities and devices subsystem 1159; in some examples a GPS and/or location information subsystem 1159; in some examples an SPLS/identities management subsystem 1159; in some examples a TP session management subsystem that operates across multiple devices 1159; in some examples an automated serving subsystem such as a virtual concierge 1159, in some examples a selective cloaking or invisibility subsystem 1159, and in some examples other types of subsystems 1159, each with its associated functions and features. In some examples a subsystem may be within a single TP device; in some examples a subsystem may be distributed such that various functions are located in local and remote TP devices, storage, and media so that various tasks and/or program storage, data storage, processing, memory, etc. are performed by separate devices and linked through a communications network(s); and in some examples parts or all of a subsystem may be provided remotely. In some examples one or a plurality of a subsystem's functions may be provided by means other than a device subsystem; in some examples one or a plurality of a subsystem's functions may be a network service; in some examples one or a plurality of a subsystem's functions may be provided by a utility; in some examples one or a plurality of a subsystem's functions may be provided by a network application; in some examples one or a plurality of a subsystem's functions may be provided by a third-party vendor; and in some examples one or a plurality of a subsystem's functions may be provided by other means. In some examples the equivalent of a device's subsystem may be provided by means other than a device subsystem; in some examples the equivalent of a device's subsystem may be a network service; in some examples the equivalent of a device's subsystem may be provided by a utility; in some examples the equivalent of a device's subsystem may be a remote application; in some examples the equivalent of a device's subsystem may be provided by a third-party vendor; and in some examples the equivalent of a device's subsystem may be provided by other means.

In some examples some TP devices 1140 may include but are not limited to AID's/AOD's that neither have nor require special internal components for processing Teleportal sessions, including opening and maintaining SPLS's, focusing one or a plurality of connections, or other types of Teleportal functions. AID's/AOD's may require nothing more than a wired and/or wireless network connection, and the ability to download and run a VTP (Virtual Teleportal) software application, in which case Teleportal processing is performed by a TP device that is attached to a network such as 1298 1280 1294 in FIG. 34. In some examples a user manually downloads a VTP application to an AID/AOD 1298 and runs it for each TP session; in some examples a user downloads a VTP application and saves it to the AID/AOD 1298 so it is available to be run each time it is needed; in some examples a user downloads a VTP application and saves it and its TP data locally on the AID/AOD 1298; in some examples a VTP stub application may be all that the AID/AOD can store, so when that is run the VTP is automatically downloaded, received and run at that time on the AID/AOD 1298; in some examples a VTP application or a VTP stub automatically downloads to the AID/AOD 1298 additional applications software and/or a user's TP data even if not requested by the user; in some examples a VTP is initiated, downloaded, installed and run on an AID/AOD 1298 by other methods and processes as described elsewhere.

TP device processing locations: FIG. 30, “TP Device Processing Location(s),” provides some examples of TP device processing, which are exemplified and described elsewhere in more detail (such as some examples that start in FIG. 112). In some examples illustrated by FIG. 30 some or all TP device processing is performed within a single TP device; in some examples some or all TP device processing is performed by a receiving TP device; in some examples some or all TP device processing is performed remotely such as by a third-party application or service or by a TP server or TP application on a network; in some examples some or all TP device processing is distributed between two or a plurality of TP devices and/or third-parties that are connected by means of one or a plurality of networks; and in some examples TP device processing is performed by a plurality of TP devices and/or third-parties such that different users see differently processed and differently constructed video and audio.

Turning now to FIG. 30 which provides some examples of TP device processing locations, in some examples TP device processing includes opening an existing SPLS (Shared Space) 1201, and in some examples TP device processing includes focusing a connection with an identity who is a member of the opened SPLS 1201. In some examples the identity is in an SPLS that is not open 1202, and then that SPLS may be opened 1202. In some examples the identity is not in an SPLS 1202 but said identity may be retrieved from a TPN Directory(ies) 1202 1203, or may be retrieved from a different (non-TPN) Directory(ies) 1202 1203. In some examples TP device processing proceeds by determining said identity's presence 1205 and current DIU (Device in Use) 1205, which includes retrieving the identity's delivery profile 1206 and DIU identification 1206 so that the identity's current available device(s) 1207 may be determined. In some examples if there are presence, connection or other rules for the SPLS of which the identity is a member 1208, then retrieve those rules 1209 and apply those rules 1209 (as described elsewhere). In some examples if there are presence, connection or other rules for that specific identity 1208, then retrieve those rules 1209 and apply those rules 1209 (as described elsewhere). In some examples if there are connection rules for the DIU 1210 or other rules for the DIU 1210, then retrieve those rules 1211 and apply those rules 1211. In some examples if there are DIU rules 1210, then retrieve those rules 1211 and apply those rules 1211. In some examples if there are DIU capabilities features 1210 or DIU capabilities limits 1210, then retrieve that DIU's features or limits 1211 and apply those to the focused connection 1211. In some examples the combination of various SPLS rules, identity rules, DIU features, etc. 1212 are utilized to process and display an identity's “presence” 1213 on a TP device, with storage of those various rules 1209 1211 1212, DIU capabilities 1211 1212, etc. until they are needed.
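
The layering of SPLS rules, identity rules, and DIU rules/capabilities described above can be sketched as an ordered merge; the rule keys shown are illustrative assumptions:

```python
# Hedged sketch of the rule layering in FIG. 30: SPLS rules, identity rules,
# and DIU rules/capabilities are retrieved and applied in order, with each
# later layer refining the earlier ones.
def apply_rules(spls_rules: dict, identity_rules: dict, diu_rules: dict) -> dict:
    effective: dict = {}
    for layer in (spls_rules, identity_rules, diu_rules):
        effective.update(layer)        # later layers override earlier ones
    return effective

print(apply_rules(
    {"presence": "visible", "max_resolution": "1080p"},
    {"presence": "audio-only"},
    {"max_resolution": "480p"},        # the DIU's capability limit wins
))
```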

In some examples when that identity is focused 1214, the previously retrieved rules 1209 1211 1212, DIU capabilities 1211 1212, etc. are applied to the TP device's processing of the focused connection 1214. In some examples if the required TP processing 1214 1215 is supported by the TP device 1215, then perform said processing on the TP device 1220 and display the processed output on the TP device 1221. In some examples if the required TP processing 1214 1215 is not supported by the TP device 1215, then in some examples determine if an appropriate remote TP processing resource is available 1216, and in some examples if a TP processing resource is available 1217, then perform said processing on the TP resource 1217, stream the output to the TP device 1217, and display the remotely processed output on the TP device 1221. In some examples if the required TP processing 1214 1215 is not supported by the TP device 1215, then in some examples determine if an appropriate remote TP processing resource is available 1216, and in some examples if a remote TP processing resource is not available 1217, then do not perform said processing on the TP resource 1216 1218 and instead apply the TP device's limits to the input stream 1218, and display only what is possible from the unprocessed input on the TP device 1221.

In some examples the combination of various SPLS rules, identity rules, DIU features, etc. 1212 are utilized to process and display an identity's “presence” 1213 on a TP device, with storage of those various rules 1209 1211 1212, DIU capabilities 1211 1212, etc. until they are needed for a focused connection 1214. Until that identity is focused 1214 the presence of that identity is maintained on the TP device 1213. In some examples the current TP device user changes to a different TP device 1222, and in some examples the new TP device automatically reopens the currently open SPLS's 1201 which may in some examples include retrieving and applying SPLS rules 1208 1209, in some examples include retrieving and applying identity rules 1208 1209, in some examples include retrieving and applying DIU rules 1210 1211, in some examples include retrieving and applying DIU capabilities 1210 1211, and in some examples storing said retrieved data 1208 1209 1210 1211 with presence indications on a TP device. In some examples the current TP device user changes to a different TP device 1222, and in some examples the new TP device automatically refocuses a current focus connection with an identity 1201, which may in some examples include retrieving and applying the appropriate rules 1208 1209 1210 1211, in some examples retrieving and applying DIU capabilities 1210 1211, and in some examples applying said retrieved data 1208 1209 1210 1211 with the appropriate local TP processing 1215 1220 1221, and in some examples applying said retrieved data 1208 1209 1210 1211 with the appropriate remote TP processing 1216 1217 1221.

In some examples the remote DIU user has presence in an open SPLS 1213 and changes to a different DIU device 1222, and in some examples the new DIU device's rules and capabilities 1210 are retrieved and applied 1211 to that remote user's presence indication 1212 1213. In some examples the remote DIU user is in a focused connection 1214 and changes to a different DIU device 1222, and in some examples the new DIU device's rules and capabilities 1210 are retrieved and applied 1211 to that remote user's focused connection by means of DIU processing 1215 1220 1221, and in some examples applying said retrieved data 1208 1209 1210 1211 with the appropriate remote TP processing 1216 1217 1221.

TP device components processing flow: FIG. 31, “TP Device Components and Processing Flow,” provides some examples in which a plurality of components, systems, methods, processes, technologies, devices and other means are combined in varying ways to form a TP device. Various combinations increase or decrease the capabilities of different types of TP devices to meet the needs of different types of uses, customers, capabilities, features and functions as described elsewhere. In some examples said TP device synthesizes a plurality of output video picture/audio signals by mixing input video picture signals from three or more sources in any of a plurality of combinations, at one or a plurality of synthesis ratios, with one or a plurality of effects. In a preferred example said TP device comprises video/audio/data inputs 1235 with a plurality of inputs; tuners 1240, format conversion 1240 with a plurality of converters; controls 1250 with a plurality of manual user controls, stored controls and automated controls over signal selection, combination(s), mixing, effects, output(s), etc.; synthesis 1245 with a plurality of mixers, effects, etc.; output 1252 with a plurality of format converters, media switches, display processor(s), etc.; a timer/sync generator 1255 to provide clock pulses for syncing video inputs during synthesis and output; a display 1257 if the TP device is used directly by a user, or appropriate controls if the TP device is remote and its output is displayed locally; a system bus 1260; interfaces 1261 to a plurality of system components; a range of wired and wireless user I/O devices 1262 for a range of types of input/output as well as various types of TP device control; local storage 1263 that may optionally include remote storage 1263 and remote resources 1263; memory 1264 that includes both RAM memory 1264 and ROM memory 1264; one or a plurality of CPU's 1265 and co-processors 1272; and a range of subsystems 1277 that in some examples include one or a plurality of SVS (Superior Viewer Sensors), in some examples recognition, in some examples tracking, in some examples GPS/location information, in some examples session management, in some examples SPLS/identities management, in some examples in/out RCTP control, in some examples background replacement, in some examples automated serving, in some examples cloaking or invisibility, in some examples other types of subsystems. In some high-level examples said TP device receives three or more video inputs; performs processing of each video input according to control instructions; selects specific inputs for one or a plurality of syntheses; sets manual, stored or automated controls for each synthesis; synthesizes the selected inputs by means such as mixing designated inputs, combining, effects, etc. including applying control instructions corresponding to the predetermined synthesis; manually or automatically designates the output(s) from synthesis; and displays said output locally and/or remotely. In some high-level examples said TP device enables one or a plurality of desired syntheses combinations, ratios, effects, etc. between a plurality of video/audio picture signal inputs, with the desired synthesized output(s) for local and/or remote display and interactive real-time use.
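
As a toy illustration of synthesis at user-set mixing ratios, the sketch below mixes three numeric stand-ins for video signals; real synthesis would operate on frames, not numbers:

```python
# Toy synthesis of three numeric "video signals" at user-set mixing ratios,
# echoing the inputs -> controls -> synthesis -> output flow described above.
def synthesize(sources: list, ratios: list) -> list:
    assert abs(sum(ratios) - 1.0) < 1e-9, "ratios should sum to 1"
    return [
        sum(ratio * source[i] for source, ratio in zip(sources, ratios))
        for i in range(len(sources[0]))
    ]

person = [0.9, 0.9, 0.9]    # stand-in for a participant's video signal
place = [0.2, 0.4, 0.6]     # stand-in for a place background
advert = [0.1, 0.1, 0.1]    # stand-in for an inserted advertisement
print(synthesize([person, place, advert], [0.6, 0.3, 0.1]))
```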

In some examples a step is initial connection with external remote input sources which in some examples are SPLS members 1 through N 1230; in some examples are PTR (Places, Tools, Resources) 1 through N 1231; in some examples are TP focused connections 1 through N 1232, and in some examples are one or a plurality of broadcast sources 1233. In some examples a step is local inputs such as user I/O devices 1262 that may be connected by means of an interface 1261; which in some examples are one or a plurality of keyboards 1262, in some examples are one or a plurality of a mouse or other pointing device(s) 1262, in some examples are a touch screen(s) 1262, in some examples are one or a plurality of cameras 1262, in some examples are one or a plurality of microphones 1262, in some examples are one or a plurality of remote controls 1262, in some examples are a wireless control device like a tablet or pad 1262, in some examples are a hand-held pointing device(s) 1262, in some examples are a viewer detection sensor(s) 1262, etc. In some examples said TP device is shared 1259 and part or all of the TP device's functions are controlled by the remote user who is sharing it 1259; and in some examples said TP device is remotely controlled 1259 and part or all of the TP device's functions are controlled by the remote user who is controlling it 1259. In some examples a step includes receiving other user control sources and inputs by means such as a network interface 1235 1236 1237 1238 1239, a device interface 1261, or other means. In some examples a specific external input(s), device input(s), source(s) or online resource(s) will be new and not have previous settings for TP device processing associated with it, and in these cases default control settings 1250 are applied; in some cases different default settings 1250 may be pre-specified for various different types of inputs; in some cases a particular source type's default settings 1250 may be automatically copied from (or adapted from) other previous successful connections of that type. In some examples specific external and remote sources and inputs 1230 1231 1232 1233, or local sources and inputs 1262, may already be stored in memory 1264 or stored in storage 1263 for automatic TP device processing based upon previous control settings 1250; in some examples these may be previous individual focused connections 1232; in some examples these may be a specific category(ies) of connection(s) such as specific PTR (Place, Tool, Resource, etc. as described elsewhere) 1231 or types of PTR 1231; in some examples these may be a specific broadcast source 1233, or in some examples a specific category(ies) of broadcast sources 1233; in some examples these may be from a specific SPLS (Shared Planetary Life Space, as described elsewhere) 1230; in some examples these may be from a specific identity 1230; in some examples these may be from a specific originating group such as a particular company or organization 1230 or other source category 1230; in some examples these sources or inputs may have one or a plurality of other identifying attributes. In some examples once TP device processing has been performed, including the application of any controls 1250, said controls settings 1250 are automatically saved for automatic retrieval and reuse in the future during reconnection with that source and/or input. 
In some examples when any controls 1250 are used for TP device processing, the user may be asked whether or not to save the new control settings 1250 for future reconnections, and in some examples this request to save controls and/or settings may be asked only at a pre-specified time such as when a focused connection is made or when a focused connection is ended.
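As an illustrative, non-limiting sketch (not part of the figures), the following Python code shows one way the per-source control settings 1250 described above might be saved and retrieved, with pre-specified defaults per input type used when a source is new; the file name, keys and function names are hypothetical assumptions, not part of the disclosure.

```python
# Hypothetical sketch: per-source control settings (1250) with per-type
# defaults, saved at a pre-specified time (e.g. when a connection ends).
import json
from pathlib import Path

SETTINGS_FILE = Path("tp_control_settings.json")  # stands in for storage 1263

DEFAULTS_BY_TYPE = {  # pre-specified defaults per input type (1250)
    "broadcast": {"mix_ratio": 0.0, "effects": []},
    "focused_connection": {"mix_ratio": 0.5, "effects": ["background_replace"]},
}

def load_settings(source_id: str, source_type: str) -> dict:
    """Return saved settings for a known source, else that type's defaults."""
    if SETTINGS_FILE.exists():
        saved = json.loads(SETTINGS_FILE.read_text())
        if source_id in saved:
            return saved[source_id]
    return dict(DEFAULTS_BY_TYPE.get(source_type, {"mix_ratio": 0.0, "effects": []}))

def save_settings(source_id: str, settings: dict) -> None:
    """Persist settings for automatic reuse on reconnection with this source."""
    saved = json.loads(SETTINGS_FILE.read_text()) if SETTINGS_FILE.exists() else {}
    saved[source_id] = settings
    SETTINGS_FILE.write_text(json.dumps(saved, indent=2))
```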

In some examples a TP device 1140 in FIG. 29 is connected to one or a plurality of servers by means of a network(s) 1174. In some examples said server(s) stores resources that are retrieved and used by the TP device during the operation of its various functions and features 1235 1240 1245 1252 1262 1265 1272 1277; in some examples said resources are programs; in some examples said resources are applications, in some examples said resources are services, in some examples said resources are control settings; in some examples said resources are templates; in some examples said resources are styles; in some examples said resources are data; in some examples said resources are recordings (which may include any type of stored videos, audio, music, shows, programs, broadcasts, events, meetings, collaborations, demonstrations, presentations, classes, etc.); in some examples said resources are advertisements; in some examples said resources are content that may be displayed during a focused connection; in some examples said resources are objects or images that may be displayed; in some examples other resources are stored and available for retrieval and use by a TP device. In some examples the TP device sends an automated and/or manual command to a server(s) to download one or a plurality of resources by means of a communications network(s) 1174 and network interface(s) 1235 1236 1237 1238 1239. In response to a TP device's 1140 command(s) a server(s) downloads the requested resource(s) to said TP device 1140 via a communication network(s) 1174. In some examples said TP device 1140 receives said requested resource(s) by means of its network interface(s) 1235 1236 1237 1238 1239, and stores it (them) in local storage 1263 and/or in memory 1264 as needed for each operation or function or feature 1235 1240 1245 1252 1262 1265 1272 1277.
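The following is a minimal, hypothetical Python sketch of the download-and-store cycle described above: the TP device requests a resource from a server over a network 1174 and caches it in local storage 1263 for later operations; the cache directory, URL handling and function names are assumptions for illustration only.

```python
# Hypothetical sketch: fetch a server-hosted resource (templates, styles,
# recordings, etc.) over a network (1174/1235) and cache it locally (1263).
import urllib.request
from pathlib import Path

CACHE_DIR = Path("tp_cache")  # stands in for local storage 1263

def get_resource(url: str) -> bytes:
    """Return a resource, downloading it only if not already cached."""
    CACHE_DIR.mkdir(exist_ok=True)
    cached = CACHE_DIR / url.rsplit("/", 1)[-1]
    if cached.exists():
        return cached.read_bytes()             # reuse from local storage
    with urllib.request.urlopen(url) as resp:  # server download on demand
        data = resp.read()
    cached.write_bytes(data)                   # store for later functions
    return data
```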

In some examples a MIDI interface 1261 receives and delivers MIDI data (that is, MIDI tone information) from and to external MIDI equipment 1262 such as in some examples MIDI-compatible musical instruments (in some examples keyboards, in some examples guitars and string instruments, in some examples microphones, in some examples wind instruments, in some examples percussion instruments, in some examples other types of instruments), and in other examples MIDI-compatible gesture-based devices 1262 in which a user's motions generate MIDI data. In some examples tone data may utilize other standards than MIDI such as SMF or other formats, in which case a MIDI interface 1261 and MIDI equipment 1262 (including musical instruments, gesture-based devices, or other types of MIDI devices) conform to the data standard employed. In some examples a general-purpose interface 1261 may be employed instead of a MIDI interface 1261, such as in some examples a USB (Universal Serial Bus), in some examples RS-232-C, in some examples IEEE 1394, etc. and in each of these cases the appropriate data standard(s) is employed.
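As a small, hypothetical illustration of the MIDI tone data such an interface 1261 carries from MIDI equipment 1262, the following Python code decodes a raw three-byte MIDI "note on" message; the function name is illustrative only, though the status-byte layout shown is the standard MIDI convention.

```python
# Hypothetical sketch: decoding a raw 3-byte MIDI "note on" message of the
# kind a MIDI interface (1261) would deliver from MIDI equipment (1262).
def decode_midi_note_on(msg: bytes):
    """Return (channel, note, velocity) for a note-on message, else None."""
    if len(msg) == 3 and (msg[0] & 0xF0) == 0x90:  # status nibble 0x9 = note on
        channel = msg[0] & 0x0F
        note, velocity = msg[1], msg[2]
        return channel, note, velocity
    return None

print(decode_midi_note_on(bytes([0x90, 60, 100])))  # middle C, channel 0
```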

In some examples controls 1250 and/or controls' user interface 1250 include various options to set a range of stored and/or user editable parameters that are employed to control in some examples external inputs 1230 1231 1232 1233; in some examples local user I/O devices 1262; in some examples conversions 1240 1241 1242 1243; in some examples a tuner(s) 1240 1241 1242 1243 that selects and displays a broadcast(s) 1233; in some examples selection of inputs 1246; in some examples designation(s) of combinations 1247; in some examples synthesis during mixing 1248 such as ratios, sizes, positions, etc.; in some examples the selection and application of effects 1249 such as parameters that alter the way a selected effect alters an unprocessed input, a mixed combination or a synthesized video; in some examples the addition and specific uses of stored inputs 1263; in some examples the addition and use of other inputs; in some examples the addition and specific uses of streamed 1235 or stored 1263 external resources; in some examples during output 1253 1254 1256; in some examples to control parts or all of one or a plurality of TP displays 1256 1257; in some examples for other types of output control(s). In some examples various user I/O devices 1262 (including all forms of TP device inputs and outputs) may include their respective specialized control(s) interface(s) with their respective buttons, sliders, physical or digital knobs, connectors, widgets, etc. for utilizing each I/O device's controls by means such as in some examples selecting; in some examples finding; in some examples setting; in some examples utilizing defaults; in some examples utilizing presets; in some examples utilizing saved settings; in some examples utilizing templates; in some examples utilizing style sheets and/or styles; in some examples utilizing or adapting previous settings from the same or similar inputs; in some examples utilizing or adapting previous settings from similar types of inputs; etc. In some examples a controls interface 1250 detects the current state(s) of the respective controls, including any changes in a control, and outputs said state data to the CPU 1266 by means of the system bus 1260.

In some examples said TP device outputs one or a plurality of unprocessed and/or synthesized video/audio streams at various processing steps to use in setting various controls, or to use directly; in some examples said TP device is controlled to output a single selected and unprocessed input video from the various inputs received; in some examples said TP device is controlled to output a grid display of selected unprocessed input videos from some or all of the inputs received; in some examples said TP device is controlled to output a combination of a single selected and unprocessed input video that is displayed in a different size and style from a grid display of selected unprocessed input videos from some or all of the inputs received; in some examples said TP device is controlled to output a preview of a synthesized combination of input videos, along with dynamically altering said synthesis as varying controls are applied; in some examples said TP device is controlled to output a preview of a synthesized combination of input videos, along with the selected and unprocessed input videos from which the synthesis is performed, along with dynamically altering said synthesis as varying controls are applied to each individual input video or to the synthesized preview of combined input videos; etc. In some examples said TP device is controlled to save particular combinations of controls to apply said saved combinations automatically to control input sources; to control types of input sources individually; to control categories of input sources as a class of inputs; to control combinations of input sources as a group of multiple specific input sources, types of input sources, categories of input sources, classes of input sources, previously combined input sources, etc. In some examples said TP device may automatically perform input, format conversion, control, synthesis, output and display with manual control at any time to specify functions such as input selection(s), combination(s) desired, mixing controls, effects, output(s), display(s), etc.

Various processes in a mixed format TP device depend on video signals for synchronization such as in some examples switching or combining a plurality of inputs from a plurality of sources; in some examples for video mixing; in some examples for video effects; in some examples for video output(s); etc. The timer/sync generator 1255 in a TP device may in some examples be a video signal generator (VSG), in some examples a sync pulse generator (SPG), in some examples a test signal generator, in some examples a VITS (vertical interval test signal) inserter, or another known type of timer/sync generator. In some examples a timer/sync generator 1255 counts time intervals to generate tempo clock pulses 1255 that are employed to synchronize, at the same timing, in some examples the varying plurality of external inputs 1230 1231 1232 1233 that are received by means of network interfaces 1235 1236 1237 1238; in some examples one or a plurality of local user I/O inputs 1262 1261 or outputs 1262 1261; in some examples converting 1240; in some examples switching inputs 1246 1247; in some examples synthesis 1245 such as mixing 1248 and/or effects 1249; in some examples various locally stored inputs 1263 such as recordings; in some examples other inputs such as advertising, content, objects, music, audio, etc. as described elsewhere; in some examples during output 1252 1253 1254 1256; and in some examples for other types of synchronization. In some examples such tempo clock pulses 1255 may be employed by the CPU 1265 1266, and/or by co-processors 1272 1273 for processing timing, in some examples for timing instructions, in some examples for interrupt instructions, or for other types of synchronization processes; and in some examples said CPU 1265 1266 and/or said co-processors 1272 1273 control components of the TP device such as in some examples external inputs 1230 1231 1232 1233; in some examples local user interface inputs 1262 1261; in some examples during mixing 1248, effects 1249 and overall synthesis 1245; in some examples stored inputs 1263; in some examples other inputs; in some examples during output 1252 1253 1254 1256; and in some examples for other types of synchronization.
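By way of an illustrative, non-limiting sketch (not part of the figures), the following Python code shows one way tempo clock pulses 1255 might be used to align frames from independently timed inputs onto shared ticks before synthesis 1245; the frame rate and all identifiers are hypothetical.

```python
# Hypothetical sketch: a timer/sync generator (1255) that defines clock
# pulses at a fixed frame rate and groups frames from independently timed
# inputs onto shared ticks, so synthesis (1245) always mixes frames that
# belong to the same instant.
FRAME_RATE = 30.0
TICK = 1.0 / FRAME_RATE

def tick_index(timestamp: float) -> int:
    """Map an input timestamp (seconds) to the nearest global clock pulse."""
    return round(timestamp / TICK)

def align(frames_by_input: dict) -> dict:
    """Return {tick: {source: frame}} so each tick holds one frame per input."""
    aligned = {}
    for source, frames in frames_by_input.items():
        for ts, frame in frames:
            aligned.setdefault(tick_index(ts), {})[source] = frame
    return aligned

print(align({"camera": [(0.034, "f0")], "broadcast": [(0.031, "g0")]}))
# both frames land on tick 1, so they are synthesized together
```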

In some examples synthesis includes at least inputs/sync 1246; (optional) manual and/or automated designation of one or a plurality of combinations of inputs 1247; (optional) mixing 1248 said designated combinations 1247; adding (optional) effects 1249 to said designated combinations 1247; (optional) combination(s) of mixing 1248 and effects 1249 applied to said designated combinations 1247; and altering any of these combinations 1247, mixing 1248, effects 1249 at any step or stage by means of various automated and/or manual controls 1250. Said automated and/or controlled synthesis 1245 1246 1247 1248 1249 1250 begins with inputs/sync 1246, which in some examples includes format conversion as described in 1151 1152 1153 in FIG. 29, but at this step 1246 confirms and/or validates that the respective inputs 1230 1231 1232 1233 1262 as received and processed by the TP device 1235 1236 1237 1238 1239 1240 1241 1242 1243 are appropriately prepared and synchronized for TP device uses such as synthesis 1245, in some examples by A/D or other format conversion 1240, in some examples by timing sync 1255, and in some examples by other types of synchronization. In some examples inputs 1230 1231 1232 1233 are received by a TP device 1235, converted for use 1240, synthesized 1245 and controlled 1245 1250, then output 1252 with each frame stored in memory 1264, and the succession of processed and stored frames in memory 1264 is output and displayed 1252 as a new synthesized video with both format 1253 and timing 1255 synchronized for display 1256 1257.

In some examples any of these inputs 1230 1231 1232 1233 and/or steps such as in some examples as received 1235, in some examples as converted for TP device use 1240, in some examples at various steps or stages of synthesis 1245, in some examples at various steps or stages of display 1252 may be displayed under automated and/or user control 1250 to a local user in some examples, to a remote user in some examples, or to an audience in some examples. In some examples a range of user controls 1250 and features may be utilized at various steps 1235 1240 1245 1252 such as changing the combination of inputs 1250 1246 1247, zooming in or out 1250 1256, changing the background 1250 1248, changing components of a background 1250 1248, inserting titles or captions 1250 1248 1249, inserting an advertisement(s) 1250 1248 1249, inserting content 1250 1248 1249, changing objects in the background 1250 1248 1249, etc.

In some examples mixing 1248 may be performed under automated and/or user control 1250 such as in some examples a video editing system 1250 1248 that includes two or a plurality of inputs 1230 1231 1232 1233 1262. In some examples an input is a background such as a place 1231 1246; in some examples an input is a local identity such as a user 1262 1246; in some examples an input is a remote identity such as an SPLS member 1230 in a focused connection 1232 1246; in some examples an input is a remotely stored advertisement 1231 1246; in some examples an input is a broadcast program 1233 1246; in some examples an input is a streaming media source 1233 1246; and in some examples another type of input may be used 1231 1246 as described elsewhere. In some examples mixing includes separating an input's 1246 foreground object(s) from its background as described elsewhere such as in FIGS. 81 through 85. In some examples mixing 1248 combines these inputs by means of known video mixing technology (as described elsewhere) to synthesize and create a local display 1256 1257 of said remote identity 1230 1232 positioned appropriately in an optionally selected place 1231 with an optionally inserted advertisement 1231 positioned appropriately in the background 1231, as well as to simultaneously synthesize and create a remote display 1256 1235 1232 of said local user 1262 positioned appropriately in said place 1231 with said advertisement 1231 positioned appropriately in the background place 1231. In some examples mixing 1248 combines these inputs by means of known video mixing technology (as described elsewhere) to synthesize and create a local display 1256 1257 of said remote identity 1230 1232 positioned appropriately in an optionally selected broadcast program 1233 or streaming media 1233 with an optionally inserted advertisement 1231 positioned appropriately in the background 1231, as well as to simultaneously synthesize and create a remote display 1256 1235 1232 of said local user 1262 positioned appropriately in said broadcast program 1233 or streaming media 1233 with said advertisement 1231 positioned appropriately in the background 1231. In some examples other inputs 1246 1247 may be mixed 1248 into the new synthesis 1245 dynamically whether automatically or under user control 1250 with various interface controls 1250 such as in some examples designators 1247 to determine which input(s) is added, and in some examples sliders 1250 to control the relative strength of the added input 1246 so that it is an appropriate fit into the current mixed output 1248, to yield differently synthesized and created video output(s) 1252. In some examples a user may see that one input component 1246 such as the participant from a remote focused connection 1232 blends too much into the background, so the user may select that designated input 1250 1247 and increase its intensity 1248 (such as by a gain slider in some examples, changing a color[s] in some examples, or altering one or a plurality of other attributes such as size or position in some examples) to readily increase its visibility in the mixed 1248 output 1252. In some examples this may be accomplished by simply varying the synthesis ratio 1248 between the designated inputs 1247 so that one or a plurality of inputs becomes more prominent in the output 1252.
In some examples other controls 1250 may be used to automatically and/or manually adjust, in real time, attributes of one or a plurality of inputs 1246 1247 and/or of the mixed 1248 output 1252, such as color differences in some examples, hue in some examples, tint in some examples, color(s) in some examples, transparency in some examples, and/or other attributes in other examples. In some examples it is possible for a TP device to utilize said mixing 1248 1250 to simultaneously create multiple new synthesized videos in real-time as described elsewhere such as in FIG. 33.
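As an illustrative, non-limiting sketch of the synthesis ratio and gain adjustments described above, the following Python code blends two designated inputs 1247 at a controllable ratio, with a foreground gain control so a participant who blends into the background can be made more prominent; it assumes NumPy and equal-sized RGB frames, and all names are hypothetical.

```python
# Hypothetical sketch: mixing (1248) two designated inputs (1247) at a
# synthesis ratio, with a foreground gain control (1250).
import numpy as np

def mix(background: np.ndarray, foreground: np.ndarray,
        ratio: float = 0.5, fg_gain: float = 1.0) -> np.ndarray:
    """Blend two equal-sized RGB frames; raising ratio or fg_gain makes the
    foreground more prominent in the mixed output (1252)."""
    fg = np.clip(foreground.astype(np.float32) * fg_gain, 0, 255)
    out = (1.0 - ratio) * background.astype(np.float32) + ratio * fg
    return np.clip(out, 0, 255).astype(np.uint8)

place = np.zeros((720, 1280, 3), np.uint8)       # a selected place (1231)
person = np.full((720, 1280, 3), 128, np.uint8)  # separated foreground (1246)
frame = mix(place, person, ratio=0.6, fg_gain=1.2)
```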

In some examples effects 1249 may be added under automated and/or user control 1250 such as in some examples changing the size of a dimension(s) of a designated input 1249 1246 1247 such as an overall size in some examples, a vertical dimension in some examples, a horizontal dimension in some examples, a cropping or zoom in some examples; in some examples changing the position(s) of one or a plurality of designated inputs 1249 1246 1247; in some examples changing the hue of one or a plurality of designated inputs 1249 1246 1247; in some examples changing the tint of one or a plurality of designated inputs 1249 1246 1247; in some examples changing the luminance of one or a plurality of designated inputs 1249 1246 1247; in some examples changing the gain of one or a plurality of designated inputs 1249 1246 1247; in some examples changing the transparency of one or a plurality of designated inputs 1249 1246 1247; in some examples changing the color difference of one or a plurality of designated inputs 1249 1246 1247; in some examples simultaneously changing multiple values or attributes of one or a plurality of designated inputs 1249 1246 1247; in some examples adding a border to one or a plurality of designated inputs 1249 1246 1247; in some examples altering one or a plurality of persons 1249 such as adding a beard in some examples, changing the hairstyle in some examples, changing hair color in some examples, adding glasses in some examples, changing the color of one or a plurality of clothing items in some examples, etc. In some examples it is possible for a TP device to utilize said effects 1249 1250 to simultaneously create multiple new synthesized videos in real-time as described elsewhere such as in FIG. 33. In some examples it is possible for a TP device to utilize both said mixing 1248 1250 and said effects 1249 1250 to simultaneously create multiple new synthesized videos in real-time as described elsewhere such as in FIG. 33.
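The following hypothetical Python sketch illustrates two of the effects 1249 named above, a luminance change and a transparency change, applied to a single designated input; it assumes NumPy frames and is offered as an illustration, not as the disclosed implementation.

```python
# Hypothetical sketch: two simple effects (1249) on a designated input
# (1246 1247): a luminance change and a transparency (alpha) change.
import numpy as np

def adjust_luminance(frame: np.ndarray, factor: float) -> np.ndarray:
    """Scale pixel brightness; factor > 1 brightens, factor < 1 darkens."""
    return np.clip(frame.astype(np.float32) * factor, 0, 255).astype(np.uint8)

def set_transparency(frame: np.ndarray, alpha: float) -> np.ndarray:
    """Attach an alpha channel so later mixing (1248) can composite it."""
    h, w, _ = frame.shape
    a = np.full((h, w, 1), int(alpha * 255), np.uint8)
    return np.concatenate([frame, a], axis=2)

frame = np.full((4, 4, 3), 100, np.uint8)
brighter = adjust_luminance(frame, 1.5)   # luminance effect
semi = set_transparency(brighter, 0.5)    # transparency effect
```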

While the TP device processing flow 1235 1240 1245 1252 1260 1261 1262 1263 1264 1265 1272 1277 has been described primarily in terms of video synthesis, in some examples each of these steps simultaneously processes audio with the respective video such that pictures and sound are appropriately synchronized during receiving 1235 in some examples, conversion 1240 in some examples, synthesis 1245 in some examples, control 1250 in some examples, output and display 1252 1256 1257 in some examples, and network communication of said output 1235 in some examples. In some examples the inputs 1246 are directly output 1252; in some examples the mixed 1248 combinations 1247 are output 1252; in some examples the mixed 1248 combinations 1247 with added effects 1249 are output 1252; in some examples the inputs 1246 with added effects 1249 are output 1252; in some examples other picture processing may be performed as directed by automated and/or manual controls 1250 then output 1252.

While the TP device processing flow 1235 1240 1245 1252 1260 1261 1262 1263 1264 1265 1272 1277 has been described primarily in terms of video synthesis, in some examples each of these steps separately processes audio from the respective video but then recombines video and audio during specific steps such as compositing in some examples, such that pictures and sound are appropriately synchronized during receiving 1235 in some examples, conversion 1240 in some examples, synthesis 1245 in some examples, control 1250 in some examples, output and display 1252 1256 1257 in some examples, and network communication of said output 1235 in some examples.

Output 1252 comprises components that in some examples include media switch(es) 1254, in some examples include (optional) format conversion 1253, in some examples include one or a plurality of display processors 1256, in some examples include one or a plurality of BOCs (Broadcast Output Components) 1256 which operate analogously to the output functions of a PC TV tuner card that includes two or more separate tuners on one card, and in some examples include one or a plurality of displays 1257. In some examples a timer/sync generator 1255 is utilized to synchronize output 1252 1253 1254 as described elsewhere. In some examples one or a plurality of media switches 1254 routes a synthesized real-time video 1245 to a plurality of simultaneous uses such as in some examples a local display 1257; in some examples a simultaneous focused connection 1232 with one or a plurality of remote participants connected by means of a network interface 1235; in some examples a simultaneous focused connection with a plurality of remote IPTR 1232 1231 connected by means of one or a plurality of network interfaces 1235; in some examples a local playback 1256 1257 and/or a broadcast transmission 1235 1233 of one or a plurality of recorded and/or live programs; in some examples simultaneously recording said synthesized video 1245 to local storage 1263 and/or to remote storage 1263; in some examples a simultaneous broadcast of said synthesized video 1245 to an audience by means of one or a plurality of network interfaces 1235 1236 1237 1238 1239; and in some examples other singular or simultaneous uses of said synthesized video 1245. In some examples one or a plurality of external TP devices (such as in some examples RCTP, in some examples AIDs/AODs, in some examples VTPs, in some examples other types of TP connections) may also provide said media switch 1254 with their synthesized output(s) 1245, and the plurality of uses of their synthesized video 1245 may be visible in some examples, or in some examples said media switch 1254 may provide routing of the external TP device's synthesized video 1245 but the distributed uses are not visible to the external TP device. In some examples of media switches 1254 one or a plurality of synthesized videos 1245 may simultaneously be input from one or a plurality of TP devices, and then be output for a plurality of purposes and connections that include in some examples real-time uses, in some examples recordings for asynchronous and/or on-demand uses at different times, and in some examples other simultaneous uses. In some examples said media switch(es) 1254 may provide built-in format conversion, and in some examples said media switch(es) 1254 may route one or a plurality of synthesized videos for separate (optional) format conversion 1253 as needed by each video. In some examples said media switch(es) 1254 may utilize timing signals 1255 in the event two or a plurality of inputs require synchronization. Therefore, in some examples said media switching 1254 is provided by one or a plurality of media switch(es) 1254 which in some examples has scalable capacity and intelligence, and in some examples combining multiple switching and format conversion functions into a TP device reduces lags and latencies, and in some examples providing multiple media switches within a TP device reduces lags and latencies.
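As an illustrative, non-limiting sketch of the fan-out behavior described above, the following Python code routes one synthesized frame 1245 to several registered uses, each with its own optional per-route format conversion 1253; the class and route names are hypothetical.

```python
# Hypothetical sketch: a media switch (1254) fanning one synthesized frame
# (1245) out to several simultaneous uses, each with optional conversion.
from typing import Callable, Dict, Optional, Tuple

class MediaSwitch:
    def __init__(self) -> None:
        self.routes: Dict[str, Tuple[Callable, Optional[Callable]]] = {}

    def add_route(self, name: str, sink: Callable,
                  convert: Optional[Callable] = None) -> None:
        """Register a use: e.g. local display (1257), focused connection
        (1232), recording (1263), broadcast (1233); convert is per-route (1253)."""
        self.routes[name] = (sink, convert)

    def push(self, frame) -> None:
        """Send one frame to every registered use, converting per route."""
        for sink, convert in self.routes.values():
            sink(convert(frame) if convert else frame)

switch = MediaSwitch()
switch.add_route("local_display", print)
switch.add_route("recording", print, convert=str.upper)
switch.push("frame-001")   # each use receives its own formatted copy
```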

In some examples said media switch 1254 includes one or a scalable plurality of parsers 1254, one or a scalable plurality of DMA (Direct Memory Access) engines 1254, and one or a scalable plurality of memory buffers that in some examples are components of the media switch 1254 and in some examples are in memory 1264. In some examples a media switch(es) includes explicit DMA engines 1254 such as in some examples one or a plurality of video DMA engines 1254; in some examples one or a plurality of audio DMA engines 1254; in some examples one or a plurality of event DMA engines 1254; in some examples one or a plurality of private and/or secret DMA engines 1254; in some examples one or a plurality of other types of DMA engines 1254. In logical sequence, the inputs to said media switch 1254 include synthesis 1245 in some examples; other inputs such as external IPTR or TP devices 1235 1240 1245 that may be passed through the TP device to the media switch with no processing in some examples, some processing in some examples, and a plurality of processing steps in some examples; and timing synchronization 1255 that may be utilized in some examples and ignored in some examples. In some examples a parser 1254 parses each input to determine its key components such as the start of all frames; in some examples a parser 1254 parses each input to associate it with periodic timed pulses 1255; in some examples a parser 1254 parses each input to identify and utilize a time code or other attribute that is part of said input. In some examples the parsing process divides each input into its component structure so that each component may be processed individually, and various types of component structure(s) and/or indicators are known and may be utilized by said parser. As an input stream is received by a parser 1254 it is parsed for its components such as each frame in some examples; in some examples when the parser finds the start of a component it directs that stream to a DMA engine 1254 which streams said input to a memory buffer location 1254 1264 until the next component is identified by said parser 1254 and streamed into its memory buffer location 1254 1264. In some examples the memory buffer location of each component is provided to the media switch's program logic 1254 via an interrupt mechanism such that the program logic knows where each memory buffer location starts and ends. In some examples the program logic 1254 stores accumulated memory buffer locations to generate a set of logical segments that is divided and packaged in various formats to correspond to each type of output required; in some examples the program logic constructs a focused connection stream 1232; in some examples the program logic constructs one or more types of PTR stream(s) 1231; in some examples the program logic constructs a digital television stream as a broadcast source 1233 and 971 in FIG. 32; in some examples the program logic constructs an analog television stream as a broadcast source 1233 and 971 in FIG. 32; in some examples the program logic constructs a streaming media source 1233 and 971 in FIG. 32; in some examples the program logic constructs a stream suitable for recording and archiving for later editing and/or playback; in some examples the program logic constructs a stream appropriate for another use.
In each of these and other examples the program logic 1254 converts the set of stored accumulated memory buffer locations into specific instructions to construct each type of output needed from a specific input, such as in some examples constructing a packet appropriate for the Internet that contains an appropriate set of components in logical order plus ancillary control data. In some examples the program logic 1254 queues up one DMA input/output transfer cycle, then clears those associated memory buffers, which limits the program steps, DMA transfers and memory buffers needed, in part because this is a circular event cycle in which the number of parallel DMA transfers for each input is minimized by clearing each cycle when it is completed. This media switch component 1254 in some examples decouples the CPUs 1265 1272 from performing one or a plurality of output routing, packaging and streaming steps.
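The following hypothetical Python sketch mirrors the parser-to-buffer cycle described above: a parser locates the start of each component in a stream, and the span between starts becomes one "memory buffer," reported to program logic through a callback standing in for the interrupt mechanism; the start code shown is illustrative, as real formats define their own markers.

```python
# Hypothetical sketch of the parser -> DMA engine -> memory buffer cycle
# (1254 1264): find each component start and hand each span to program logic.
START = b"\x00\x00\x01"   # illustrative start code

def parse_components(stream: bytes, on_component) -> None:
    """Split a stream at component starts; on_component plays the role of
    the interrupt that tells program logic where a buffer begins and ends."""
    starts = []
    i = stream.find(START)
    while i != -1:
        starts.append(i)
        i = stream.find(START, i + 1)
    for begin, end in zip(starts, starts[1:] + [len(stream)]):
        on_component(stream[begin:end])   # one "memory buffer" per component

parse_components(b"\x00\x00\x01AAA\x00\x00\x01BBB", print)
```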

In some examples one or a plurality of multiplexers 1254 may be used instead of a media switch(es) 1254 to route a synthesized real-time video 1245 to a plurality of simultaneous uses such as in some examples a local display 1257; in some examples a simultaneous focused connection 1232 with one remote participant communicated by means of a network interface 1235; in some examples a simultaneous focused connection with a plurality of remote IPTR 1232 1231 communicated by means of one or a plurality of network interfaces 1235; in some examples simultaneously recording said synthesized video 1245 to local storage 1263 and/or to remote storage 1263; in some examples a simultaneous broadcast 1233 of said synthesized video 1245 to an audience by means of one or a plurality of network interfaces 1235; and in some examples other simultaneous uses of said synthesized video 1245. In some examples this means that a single synthesized video 1245 may simultaneously serve multiple purposes and connections that include both real-time uses and recordings for asynchronous and/or on-demand uses at a different time, and require multiplexer 1254 routing of a single synthesized video 1245, with or without format conversion 1253, for each simultaneous use.

In some examples each type of output 1245 1254 is passed to other TP device components 1254, or in some examples to other TP device components 1253 1256, that may in turn further process that output such as in some examples adjusting output image(s) in response to input and processing from a device's viewer detection sensor(s) 1262, in some examples encoding it, in some examples formatting it for a particular use, in some examples displaying it locally, etc. Therefore, a scalable media switch(es) 1254 receives one or a plurality of inputs 1235 1240 1245 and in some examples converts each input into one or a plurality of appropriately formatted outputs to fit a plurality of uses, or in some examples passes said outputs to successive TP device components 1256 1257 1235. In some examples a media switch 1254 or format conversion 1253 performs additional processing such as encoding using VBR (Variable Bit Rate) or in some examples another format. In some examples VBR reduces the data in successive frames by encoding movement and more complex segments at a higher bit rate than less complex segments, such as a blank wall requiring less space and bandwidth than a colorful garden on a windy day. Numerous formats may optionally be VBR encoded including in some examples MPEG-2 video; in some examples MPEG-4 Part 2 video; in some examples H.264 video; in some examples audio formats such as MP3, AAC, WMA, etc.; and in some examples other video and audio formats.
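As a minimal, hypothetical illustration of the VBR principle described above, the following Python code allocates each frame a bit budget in proportion to its complexity while holding the average bit rate constant; the numbers and function name are assumptions for illustration.

```python
# Hypothetical sketch of the VBR idea: complex frames get more bits, simple
# frames fewer, while the overall average bit rate stays fixed.
def allocate_bits(complexities, avg_bits_per_frame):
    """Return a per-frame bit allocation with a constant total budget."""
    total = sum(complexities) or 1.0
    budget = avg_bits_per_frame * len(complexities)
    return [budget * c / total for c in complexities]

print(allocate_bits([0.1, 0.1, 2.0, 5.0], avg_bits_per_frame=100_000))
# the two static "blank wall" frames get ~5,500 bits each; the two complex
# "windy garden" frames receive the remainder of the budget
```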

In some examples a single synthesized real-time video 1245 is created by in some examples designating inputs 1247, in some examples mixing 1248, in some examples adding effects 1249, in some examples previewing the output(s) in real time 1256 1257 and applying controls 1250, and in some examples other synthesis steps as described elsewhere. In some examples said synthesized video 1245 requires format conversion 1253 such as in some examples NTSC encoding 1253 to create a composite signal from component video picture signals. In some examples said synthesized video 1245 does not require format conversion 1253 and may be passed directly from synthesis 1245 in some examples to a media switch(es) 1254, in some examples to display processing 1256, in some examples to a network interface 1235, and in some examples to another use as described elsewhere. In some examples (optional) format conversion 1253 is performed automatically based on the type of use(s) or display(s) in use by each TP device 1140 in FIG. 29 such as in some examples to fit an SDI (Serial Digital Interface) interface as used in broadcasting; in some examples composite video; in some examples component video; in some examples to conform to a standard such as the various SMPTE (Society of Motion Picture and Television Engineers) standards; in some examples to conform to ITU-R Recommendation BT.709 for high definition televisions with a 16:9 aspect ratio (widescreen); in some examples to conform to HDMI; in some examples to conform to specific pixel counts such as in various examples 640×480 (VGA), 800×600 (SVGA), 1024×768 (XGA), 1280×1024 (SXGA), 1600×1200 (UXGA), 1400×1050 (SXGA+), 1280×720 (WXGA), 1600×768/750 (UWXGA), 1680×1050 (WSXGA+), 1920×1200 (WUXGA), 2560×1600 (WQXGA), 3280×2048 (WQSXGA), 480i (NTSC television), 576i (PAL television), 480p (720×480 progressive scan television), 576p (720×576 progressive scan television), 720p (1280×720 progressive scan high definition television), 1080i (1920×1080 high definition television), 1080p (1920×1080 progressive scan high definition television), and other pixel counts and display resolutions such as for various cell phones, e-tablets, e-pads, net books, etc.

In addition to formatting for displays, (optional) format conversion 1253 may be performed in some examples for video compression to reduce bandwidth for transmission in some examples on one or a plurality of networks, in some examples for broadcast(s), in some examples for a cable television service, in some examples for a satellite television service, or in some examples for another type of bandwidth reduction need. In some examples (optional) compression 1253 is performed automatically based on the type of network, application, etc. that is being utilized such as in some examples H.261 (commonly used in videoconferencing, video telephony, etc.); in some examples MPEG-1 (commonly used in video CDs); in some examples H.262/MPEG-2 (commonly used in DVD video, Blu-Ray, digital video broadcasting, SVCD); in some examples H.263 (commonly used in videoconferencing, videotelephony, video on mobile phones [3GP]); in some examples MPEG-4 (commonly used for video on the Internet [DivX, Xvid, etc.]); in some examples H.264/MPEG-4 AVC (commonly used in Blu-Ray, digital video broadcasting, iPod video, HD DVD); in some examples VC-1 (the SMPTE 421M video standard); in some examples VBR as described elsewhere; and in some examples other types of video compression and/or standards.

In some examples one or a plurality of display processor components 1256 (also known as a GPU[s] or Graphics Processing Unit[s], which may also encompass a BOC[s] or Broadcast Output Component[s] that operates analogously to the output functions of a PC TV tuner card that includes two or more separate tuners on one card) receives said inputs and/or output(s) 1235 1240 1245 1254 1253 and utilizes a specialized processor that accelerates graphics rendering such as for displaying a plurality of simultaneous output streams in some examples; for 3-D rendering in some examples; for high definition video in some examples; for supporting multiple simultaneous displays in some examples; for 2-D acceleration in some examples; for GPU-assisted video encoding or decoding in some examples; for adding overlays such as controls and icons to some displays in some examples; for specialized features such as resolution conversions, filter processing, color corrections, etc. in some examples; for encryption prior to transmission in some examples; or for other display-related functions. In some examples a display processor(s) is a separate component(s) such as in some examples a video card, a GPU, video BIOS, video memory, etc.; in some examples one or a plurality of display outputs include VGA (Video Graphics Array), DVI (Digital Visual Interface), HDMI (High Definition Multimedia Interface), composite video, component video, S-video, DisplayPort, etc. In some examples a display processor(s) is an integrated component such as on a motherboard in which a graphics chipset provides display processing, but may or may not have lower performance than a separate display processor(s) component. In some examples a plurality of display processors are utilized to display a single image or video stream; in some examples a plurality of display processors are utilized to display multiple video streams; in some examples one or a plurality of display processors are utilized as general purpose graphics processors that provide stream processing, which in some examples adds a GPU's floating-point computational capacity to a TP device's processing capacity 1266 1273.

In some examples a TP display 1257 visually displays any of the range of selected video such as in some examples video after synthesis 1245; in some examples video after mixing 1248; in some examples video after effects 1249; in some examples video after format conversion 1253; in some examples a direct display of a broadcast(s) received 1233; in some examples a received broadcast 1233 after conversion 1241; in some examples video and audio after any combination of synthesis 1245, mixing 1248, effects 1249, conversion 1253, etc.; in some examples one or a plurality of unprocessed inputs 1230 1231 1232 1233; in some examples one or a plurality of user I/O 1262; in some examples partially processed video during synthesis 1245; in some examples stored video/audio from local storage 1263 and/or remote storage 1263; and in some examples other video data from any of a range of extensible sources. In some examples a local TP display device 1257 may be any form of display such as in some examples an LCD (Liquid Crystal Display); in some examples a plasma screen; in some examples a projector; in some examples any other form of display. In some examples a TP device's output 1252 is processed 1256 as described elsewhere, and output to one or a plurality of network interfaces 1235 1236 1237 1238 1239 for transmission over a network for remote display such as in some examples with SPLS members 1 through N 1230, in some examples with PTR 1 through N 1231, in some examples with focused connections 1 through N 1232, in some examples with one or a plurality of broadcast sources 1233, in some examples with one or a plurality of TP devices, in some examples with one or a plurality of AIDs/AODs, in some examples with one or a plurality of RCTP devices, and in some examples with any of an extensible range of devices.

In some examples a display presents TP device output that in some examples includes a consistent TP interface as described elsewhere; in some examples includes video; in some examples includes audio; in some examples includes icons; in some examples includes 3-D; in some examples includes features for tactile interactions; in some examples includes haptic features; in some examples includes visual screens; in some examples includes e-paper; in some examples includes wearable displays such as headsets; in some examples includes portable wireless pads; in some examples includes analog monitors; in some examples includes digital monitors; in some examples includes multiple simultaneous types of wired and wireless display devices; etc. In some examples display devices are interactive and provide TP input such as in some examples touch interface displays; in some examples haptic displays (which rely on the user's sense of touch by including motion, forces, vibrations, etc. as stimulation in some examples, content in some examples, interaction in some examples, feedback in some examples, means for input in some examples, and other interactive uses); in some examples a headset that includes one or two earpieces and a microphone for voice input; in some examples wearable devices such as a portable projector; in some examples projected interactive objects such as a projected keyboard; etc. In some examples displays include a CRT; in some examples a flat-panel display; in some examples an LED (Light Emitting Diode) display; in some examples a plasma display panel; in some examples an LCD (Liquid Crystal Display) display; in some examples an OLED (Organic Light Emitting Diode) display; in some examples a head-mounted display; in some examples a video projector display; in some examples an LCD projector display; in some examples a laser display (sometimes known as a laser projector display); in some examples a holographic display; in some examples an SED (Surface Conduction Electron Emitter Display) display; in some examples a 3-D display; in some examples an eidophor front projection display; in some examples a shadow mask CRT; in some examples an aperture grille CRT; in some examples a monochrome CRT; in some examples a DLP (Digital Light Processing) display; in some examples an LCoS (Liquid Crystal on Silicon) display; in some examples a VRD (Virtual Retinal Display) or RSD (Retinal Scan Display, used in some types of virtual reality); or in some examples another type of display.

In some examples of TP devices multiple displays are present; in some examples two or a plurality of displays are cloned so that each receives a duplicate signal of the same display; in some examples two or a plurality of displays share a single spanned display that is extended across the multiple displays, resulting in one large contiguous area in which objects and components may be moved between (or in some examples shared between two or more of) the various displays. In some examples multiple display processor units (also known as GPUs or Graphics Processing Units) 1256 may be used to enable a larger number of displays to create one single unified display. In some examples of TP devices larger displays may be employed such as in some examples LCD (Liquid Crystal Display) displays; in some examples PDP (plasma) displays; in some examples DLP (Digital Light Processing) displays; in some examples SED (Surface Conduction Electron Emitter Display) displays; in some examples FED (Field Emission Display) displays; in some examples projectors of various types (such as for example front projections and rear projections); in some examples LPD (Laser Phosphor Display) displays; and in some examples other types of large screen technology displays.

In some examples programs to be executed 1267 1268 1274 1275 by the CPU 1266 and/or by a co-processor(s) 1273 in some examples are stored in local storage 1263, in some examples are stored in remote storage 1263, in some examples are stored in ROM memory 1264, and in some examples are stored in another form of storage 1263 or memory 1264. As described elsewhere (such as in FIG. 29) the program(s), module(s), component(s), instructions, program data, user profile(s) data, IPTR data, etc. that enable operation of a TP device may be stored in local storage and/or remote storage and retrieved as needed to operate said TP device. Additionally, storage 1263 in FIG. 31 enables storage and retrieval of the automated settings and/or manual control settings 1250 that are employed in some examples in one or a plurality of mixing steps 1248, in some examples in applying one or a plurality of effects 1249, in some examples in one or a plurality of format conversions 1240 1241 1242 1243 1253, in some examples in one or a plurality of uses of timing or sync signals 1255, in some examples in one or a plurality of displays 1256 1257, in some examples in one or a plurality of network communications 1235 1236 1237 1238 1239, and in some examples in other stored settings and/or controls. These pre-set stored settings and/or control settings may be in the form of video output types, video styles, configurations, templates, style sheets, etc. At predetermined steps, such as in some examples when inputs 1246 have been designated 1247 and output formats are known 1253 including their display(s) 1256 1257, said local storage 1263 and/or remote storage 1263 may be accessed to retrieve the appropriate automated settings and/or appropriate default control settings 1250 so that the CPU 1265 1266 and/or co-processors 1272 1273 may operate properly to perform the respective operations 1248 1249 1240 1253 1255 1256 1235 etc. The local storage 1263 and/or remote storage 1263 may employ any fixed media such as hard disks, flash (semiconductor) memory, etc. and/or removable media such as recordable CD-R and CD-RW, DVD-R, magneto optical (MO) discs, etc. In some examples this enables a plurality of pre-set synthesis patterns to be stored as a network resource for a plurality of users to retrieve whenever needed, whether these are retrieved individually or a collection(s) is downloaded to local storage for local retrieval. As needed, one or a plurality of pre-set synthesis patterns may be immediately retrieved and applied such as in a one-touch operation, which in some examples enables prompt and immediate switches between different types of mixes 1248, in some examples different effects 1249, in some examples different display arrangement patterns 1256 1257 1262, and in some examples any other pre-set and stored immediate transformations or component settings.
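The following hypothetical Python sketch illustrates the one-touch retrieval described above: named, pre-set synthesis patterns (in some examples mixes 1248, effects 1249 and display arrangements) are stored as templates and applied to the current control state 1250 in a single operation; the preset names and keys are illustrative assumptions.

```python
# Hypothetical sketch: named pre-set synthesis patterns (templates) applied
# to the current control state (1250) in one touch.
PRESETS = {
    "interview": {"mix_ratio": 0.7, "effects": ["border"], "layout": "side_by_side"},
    "broadcast": {"mix_ratio": 1.0, "effects": ["titles"], "layout": "full_screen"},
}

def apply_preset(state: dict, name: str) -> dict:
    """Return a new control state with the whole pattern applied at once."""
    new_state = dict(state)
    new_state.update(PRESETS[name])
    return new_state

state = apply_preset({"mix_ratio": 0.5}, "interview")
print(state)   # one touch switches the mix (1248), effects (1249) and layout
```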

In some examples RAM memory 1264 is utilized as working memory by the CPU 1266 and/or by a co-processor(s) 1273 to store various program logic 1267 1274 in some examples; scheduled operations 1268 1275 in some examples; lists 1269 1276 in some examples; queues 1269 1276 in some examples; counters 1269 1276 in some examples; and data 1235 1240 1245 1252 in some examples as said processors execute various programs 1267 1268 1274 1275. In some examples RAM memory 1264 is utilized as working memory for storing various inputs 1230 1231 1232 1233 1262 as they are undergoing various TP device processes under program control such as in some examples conversion 1240, in some examples synthesis 1245 and in some examples output 1252.

In some examples a TP device includes considerable processing power as would be expected for devices that provide and support “digital presence” as described elsewhere. Just as a contemporary laptop with an advanced multi-core processor has more processing power than a previous generation's mainframe computer, in some examples said continuously advancing processing power includes one or a plurality of supervisor CPUs 1265 1266, and in some examples said processing includes one or a plurality of co-processors 1272 1273 that are selectable by the supervisor CPU(s) 1266. In some examples said co-processors 1272 are connected via a bus 1260 to the supervisor CPU 1266, with said co-processors including video co-processors in some examples, audio co-processors in some examples, and graphics co-processors (such as GPUs) in some examples. In some examples a supervisor memory 1264 is connected to the supervisor CPU 1266 directly, and in some examples connected via a bus 1260. In some examples one or a plurality of co-processor memories 1264 is connected to a co-processor(s) 1273 directly, and in some examples connected via a bus 1260. In some examples memory 1264 may be dynamically utilized as required as any of supervisor CPU memory 1264 1265 1266, co-processor memory 1264 1272 1273, data processing memory 1264 1265 1266 1272 1273, media switching memory 1264 1254, or another memory use. In some examples a supervisor application 1267 selectively assigns video inputs 1235, format conversion 1240, synthesis 1245, outputs 1252, etc. to one or a plurality of co-processors 1273 and co-processors' applications 1274. In some examples a supervisor application 1267 includes processing scheduling 1268 with in some examples associated lists 1269, in some examples queues 1269, in some examples counters 1269, etc. In some examples a supervisor application 1267 includes co-processing scheduling 1268 1275 with in some examples associated co-processor lists 1269 1276, in some examples co-processor queues 1269 1276, in some examples co-processor counters 1269 1276, etc. In some examples a supervisor application 1267 provides instructions to one or a plurality of co-processors' 1273 applications 1274 that in some examples include associated lists 1276, in some examples include associated queues 1276, in some examples include associated counters 1276, etc. In some examples said supervisor memory 1264 stores segments of one or a plurality of video streams for assignment to a selected co-processor 1273 and/or a selected co-processor application(s) 1274. In some examples said supervisor processor 1266 or selected co-processor(s) 1273 performs selectively instructed processing of video inputs 1235, in some examples format conversion 1240, in some examples synthesis 1245, in some examples outputs 1252, etc. In some examples said memory 1264 stores segments of one or a plurality of video streams as processed by said supervisor processor 1266 or in some examples selected co-processor(s) 1273. In some examples as co-processors 1273 utilize application logic 1274 to complete each scheduled 1275 1276 step, said supervisor application 1267 dynamically updates said lists 1269, said queues 1269, said counters 1269, etc., producing a cycle in which said supervisor application logic 1267 dynamically re-schedules co-processors 1273 for appropriate subsequent TP processing steps 1235 1240 1245 1252.
In some examples controls 1250 dynamically alter supervisor application 1267 instructions, schedule(s) 1268, lists 1269, queues 1269, counters 1269, etc. In some examples controls 1250 dynamically alter co-processor applications 1274 instructions, schedule(s) 1275, lists 1276, queues 1276, counters 1276, etc. In some examples automated controls such as from making new focused connections 1232, in some examples adding PTR to a focused connection 1231, in some examples displaying a selected broadcast 1233, or in some examples other user actions or TP device processing steps may dynamically alter supervisor application 1267 instructions, schedule(s) 1268, lists 1269, queues 1269, counters 1269, etc. In some examples automated controls such as from making new focused connections 1232, in some examples adding PTR to a focused connection 1231, in some examples displaying a selected broadcast 1233, or in some examples other user actions or TP device processing steps may dynamically alter co-processor applications 1274 instructions, schedule(s) 1275, lists 1276, queues 1276, counters 1276, etc. In some examples the number of co-processors 1273 is selected by the supervisor application 1267 in some examples, by the processing scheduler 1268 in some examples, or by other means in some examples. In some examples the number of video streams processed by each co-processor 1273 is selected by the supervisor application 1267 in some examples, by the processing scheduler 1268 in some examples, or by other means in some examples. In some examples the number and range of outputs 1252 processed by each co-processor 1273 is selected by the supervisor application 1267 in some examples, by the processing scheduler 1268 in some examples, or by other means in some examples.
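As an illustrative, non-limiting sketch of the supervisor/co-processor scheduling described above, the following Python code shows a supervisor application 1267 assigning steps to the least-loaded co-processor queue and updating per-co-processor counters as each cycle completes; all identifiers are hypothetical.

```python
# Hypothetical sketch: a supervisor application (1267) scheduling steps
# (1268) across co-processors (1273) using queues and counters (1269 1276).
from collections import deque

class Supervisor:
    def __init__(self, n_coprocessors: int) -> None:
        self.queues = [deque() for _ in range(n_coprocessors)]
        self.completed = [0] * n_coprocessors   # per-co-processor counters

    def schedule(self, step) -> None:
        """Assign the next step to the least-loaded co-processor queue."""
        min(self.queues, key=len).append(step)

    def run_cycle(self) -> None:
        """Each co-processor completes one step; counters are updated so the
        supervisor can re-schedule subsequent steps (1235 1240 1245 1252)."""
        for i, q in enumerate(self.queues):
            if q:
                q.popleft()()
                self.completed[i] += 1

sup = Supervisor(2)
sup.schedule(lambda: print("format conversion 1240"))
sup.schedule(lambda: print("synthesis 1245"))
sup.run_cycle()
print(sup.completed)   # [1, 1]
```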

TP device processing of broadcasts: In some examples it is an object of a Teleportal device to provide direct access to a converged digital environment with a single digital device and user interface. In some examples Teleportals comprise electronic devices under user control that may be used to watch one or a plurality of current broadcasts from various television, radio, Internet, Teleportal and other sources 971 on one or a plurality of Teleportals 974 973; and in some examples Teleportals may be used to record one or a plurality of broadcasts for later viewing; and in some examples Teleportals may be used to blend current and recorded broadcasts into synthesized constructs and communications as described elsewhere; and in some examples Teleportals may be used to interactively communicate one or a plurality of current or recorded broadcasts and/or syntheses to other viewers; and in some examples Teleportals may be used for other uses of broadcasts as described herein and elsewhere. In addition, a Teleportal device may be used for other functions simultaneously while watching one or a plurality of broadcasts. Therefore, in some examples it is an object of a Teleportal device to reduce the need for one or a plurality of separate television sets; in some examples it is an object of a Teleportal device to reduce the need for one or a plurality of separate free broadcast and/or paid subscription services (such as cable or satellite television); and/or in some examples it is an object of a Teleportal device to reduce the need for one or a plurality of set-top boxes to provide separate decoding and use of broadcast sources.

FIG. 32, “TP Device Processing of Broadcasts,” provides some examples in which broadcast sources 971 may be watched and/or listened to on Teleportal devices or used by Teleportal devices, making a TP device a substitute for the combination of a television set, a set-top box and/or a subscription broadcast service, plus providing other Teleportal functions as described elsewhere such as recording in some examples, playback in some examples, broadcasting in some examples, etc. In some examples broadcast sources 971 include cable television (herein TV) 971; in some examples satellite TV 971; in some examples over-the-air TV 971; in some examples IPTV 971 (Internet Protocol Television); in some examples TPTV 971 973 (Teleportal Television broadcasting) such as from other TP devices or users; in some examples Internet Radio 971 (also known as web radio); in some examples streaming media 971 (including short videos, webcasts, etc.) received from a telecommunications network; in some examples Web TV 971 or Internet TV 971; in some examples other types of broadcast sources 971 and broadcasts 971. In some examples broadcast sources 971 973 may be located at any program or broadcast distribution facility 971 973; in some examples a cable system head end 971 973; in some examples a satellite broadcast distribution facility 971 973; in some examples a data center containing media servers 971 973; in some examples an Internet hosting service 971 973; in some examples a “cloud” service 971 973; in some examples an individual's Teleportal device(s) 973; or in some examples any suitable broadcast distribution device or facility. In some examples a “local broadcast source” includes a local device source as described elsewhere such as in some examples a DVD player; in some examples a CD player; in some examples a Blu-ray player; in some examples a VCR; in some examples a directly connected digital camera; in some examples a directly connected camcorder; in some examples other types of media sources and/or players. In some examples remote broadcast sources 971 973 are received over one or a plurality of networks 972, while in some examples local broadcast sources include directly connected players and resources.

Watching, listening to, and/or using these broadcasts may be accomplished in a TP device 974 by utilizing a subset of the TP device components described in FIG. 31 and elsewhere. In some examples user control of said TP device 974 is performed by utilizing various user I/O devices 994 as described elsewhere, such as in some examples one or a plurality of remote controls 994; in some examples said TP device 974 is shared 995 and part or all of the TP device's functions are controlled by the remote user who is sharing it 995 and is therefore able to use it to watch broadcasts from a remote location; in some examples said TP device 974 is remotely controlled 995 and part or all of the TP device's functions are controlled by the remote user who is controlling it 995 and is therefore able to use it to watch broadcasts from a remote location; in some examples user control 994 995 is exercised by signals 994 995 that are received 997, processed 997 and utilized to control 997 982 976 said TP device's features and functions. In some examples TP device components include network interfaces 977; in some examples (optional) input tuner/format conversion 979; in some examples synthesis 981; in some examples controls 982 (such as in some examples switching a broadcast source 982 such as in some examples between a set top cable TV box and online IPTV; in some examples viewing one or more program guides 982; in some examples changing a television channel 982 for viewing the new channel; in some examples controlling the recording of a current or future broadcast 982; in some examples controlling the recording of a current communication session 982; in some examples using a current or recorded broadcast as input to synthesis 982; in some examples playing back a recording 982; or in some examples other controllable broadcast or recording/playback functions 982); in some examples (optional) output format conversion 985; in some examples a BOC 986 (Broadcast Output Component); in some examples display processing 987; in some examples playing a recording 989 in part or all of a TP device's display; in some examples playing a current broadcast 990 in part or all of a TP device's display; in some examples playing a processed synthesis 987 991 between a current broadcast or a recorded broadcast and other video and audio components; in some examples communicating, broadcasting or sharing said recording(s), broadcast(s) and synthesis(es) via a network 977 973; or in some examples performing other functions as described elsewhere.

In some examples a TP device includes user control 996 as described elsewhere that may receive signals from user I/O devices such as in some examples a keyboard 994; in some examples a keypad 994; in some examples a touchscreen 994; in some examples a mouse 994; in some examples a microphone and speaker for voice command interactions 994; in some examples one or a plurality of remote controls 994 of varying types and configurations; and in some examples other types of direct user controls 994. In some examples a device 974 may be shared 995 and the remote user(s) 995 who is sharing said device 974 provides user control 996 as described elsewhere; and in some examples a device 974 may be under remote control 995 and the remote user(s) 995 who is controlling said device 974 provides user control 996 as described elsewhere. Said user control 996 includes receiving said control signal(s) 994 995 997; processing 997 said received signal(s) as described in FIG. 35 and elsewhere; then controlling the appropriate function 982 976 or component 976 982 of said TP device 974. In some examples said received 997 and processed signals 997 are selectively transmitted to the TP device component 982 976 986 which in some examples controls functions such as choosing between various broadcast sources 971; in some examples displaying one or a plurality of interactive program guides 982; in some examples choosing a particular channel to watch 982; in some examples choosing a current broadcast 982 990 to watch; in some examples recording a particular broadcast 982 either currently or on a specific day and time; in some examples utilizing a current broadcast in synthesized communications 981; in some examples utilizing a recorded broadcast in synthesized communications 981; in some examples playing back a recorded broadcast 982 989 to watch it; in some examples playing back recordings 982 989 at scheduled dates and times and providing that as a TPTV (Teleportal Television) schedule for access by others 973; or in some examples performing another controllable function 982.
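The following hypothetical Python sketch illustrates the control path described above: a received and processed signal 997 is dispatched to the matching device function 982; the command names and state keys are assumptions for illustration, not part of the disclosure.

```python
# Hypothetical sketch: a received/processed control signal (994 995 997) is
# dispatched to the matching device function (982).
COMMANDS = {
    "select_source":  lambda dev, arg: dev.update(source=arg),
    "change_channel": lambda dev, arg: dev.update(channel=int(arg)),
    "record":         lambda dev, arg: dev.update(recording=True),
}

def handle_signal(device_state: dict, command: str, arg: str = "") -> None:
    """Route one control signal, whether from local I/O (994) or a remote
    sharing/controlling user (995), to the appropriate function (982)."""
    action = COMMANDS.get(command)
    if action:
        action(device_state, arg)

state = {}
handle_signal(state, "change_channel", "7")
print(state)   # {'channel': 7}
```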

In these examples each step and its automated control and/or user control are known and will not be described in detail herein. In some examples said received broadcast is comprised of a broadcast stream (which may be in a multitude of formats such as in some examples NTSC [National Television System Committee], in some examples PAL [Phase Alternating Line], in some examples DBS [Direct Broadcast Satellite], in some examples DSS [Digital Satellite System], in some examples ATSC [Advanced Television Systems Committee], in some examples MPEG [Moving Pictures Experts Group], in some examples MPEG2 [MPEG2 Transport], or in some examples other known broadcast or streaming formats) and said (optional) tuner/format conversion 978 979 may disassemble said broadcast stream(s) to find programs within it and then demodulate and decode said broadcast stream according to each kind of format received. In some examples this may include an IF (Intermediate Frequency) demodulator that demodulates a TV signal at an intermediate frequency; in some examples this may include an A/D converter that converts an analog TV signal into a digital signal; in some examples this may include a VSB (Vestigial Side Band) demodulator/decoder; in some examples a video decoder and an audio decoder respectively decode video and audio signals; in some examples a parser parses the stream to extract the important video and/or audio events (such as the start of frames, the start of sequence headers, etc. that device logic uses for functions such as in some examples playback, in some examples fast-forward, in some examples slow play, in some examples pause, in some examples reverse, in some examples fast-reverse, in some examples slow reverse, in some examples indexing, in some examples stop, or in some examples other functions); and/or in some examples other known types of decoders, converters or demodulators may be employed. Therefore, in some examples a sequence of two or a plurality of demodulators/decoders may be employed (for example, an ATSC signal may be converted into digital data by means of an IF demodulator, an A/D converter and a VSB demodulator/decoder; and for another example, an NTSC signal may be converted by means of a video decoder and an audio decoder), whereby said tuner/(optional) format conversion 979 tunes to a particular program within said broadcast sources 971 973, if needed provides appropriate format conversion 979, demodulation 979, decoding 979, parses said selected stream 979, and provides said appropriately formatted and parsed stream to the rest of the TP device.
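
For illustration only, a minimal Python sketch of selecting a demodulation/decoding sequence by broadcast format, as in the tuner/format conversion 979 described above; the stage names are hypothetical labels, not actual signal processing.

    # Illustrative sketch only: a table mapping each received format to the
    # sequence of demodulators/decoders applied to it, per the examples above.
    DECODE_CHAINS = {
        # An ATSC signal may pass through an IF demodulator, an A/D
        # converter and a VSB demodulator/decoder.
        "ATSC": ["if_demodulator", "ad_converter", "vsb_decoder"],
        # An NTSC signal may be handled by a video decoder and an audio decoder.
        "NTSC": ["video_decoder", "audio_decoder"],
        # An MPEG2 transport stream is demultiplexed and parsed for events
        # (frame starts, sequence headers) used for trick-play functions.
        "MPEG2": ["ts_demux", "parser"],
    }

    def convert(stream_format, payload):
        chain = DECODE_CHAINS.get(stream_format)
        if chain is None:
            raise ValueError(f"no conversion chain for format {stream_format}")
        for stage in chain:          # each stage would transform the stream
            payload = f"{stage}({payload})"
        return payload

    print(convert("ATSC", "raw_rf_signal"))   # IF/A-D/VSB sequence
    print(convert("NTSC", "raw_rf_signal"))   # video + audio decoders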

In some examples after broadcast sources 971 973 are received 977, format conversion 979 is unnecessary, and the main controls employed 982 are to select a particular broadcast and pass it directly to output 984 985 986 to be watched 988 990. In some examples after broadcast sources 971 973 are received 977, format conversion 979 is performed, and the main controls employed 982 are to select a particular broadcast and pass it directly to output 984 985 986 to be watched 988 990. In some examples after broadcast sources 971 973 are received 977 and (optional) format conversion 979 is performed, the main controls employed 982 are to select a particular broadcast and pass it to the synthesis/controls functions 980 981 982 (as described elsewhere) in some examples for recording 981 982 (as described elsewhere); in some examples for synthesis 981 982 (as described elsewhere); in some examples to utilize other features 981 982 (as described elsewhere). In some examples output 984 includes (optional) format conversion 985 and said (optional) format conversion 985 may include encoding video 985 986 987 such as in some examples encoding video to display it 988 989 990 991 977 as described elsewhere; in some examples encoding a television signal 985 986 987 to display on a television; in some examples to encode video 985 986 987 such as for streaming 977 to fit a remote use or system. In some examples output 984 includes (optional) format conversion 985 and said (optional) format conversion 985 may include formatting audio signals for outputting audio in some examples to a speaker(s) 988; in some examples to an audio amplifier 988; in some examples to a home theater system 988; in some examples to a professional audio system 988; in some examples to a component of media 988 989 990 991 977; or in some examples to another form of audio playback 988. In some examples output 984 includes (optional) format conversion 985 and said (optional) format conversion 985 may include encoding video and audio such as in some examples to display it as a processed synthesis 987 991 as described elsewhere; in some examples encoding a television signal to display on a television; in some examples to encode video 985 986 987 such as for streaming 977 to fit a remote use or system.

Said functions and choices may be controlled in some examples by one or a plurality of users by means of user I/O devices 994; in some examples by one or a plurality of remote controls 994; in some examples a device 974 may be shared 995 and the remote user(s) 995 provides user control 996; and in some examples a device 974 may be under remote control 995 and the remote user(s) 995 provides user control 996. As an example, if a user turns the volume up or down by using a remote control 994 996 997, the control function 982 adjusts the output of the audio function.

The above may be extended and expanded by data carried in the VBI (Vertical Blanking Interval) of analog television channels, or in a digital data track of digital television channels (a digital channel may include separate video, audio, VBI, program guide, and/or conditional access information as separate bitstreams, multiplexed into a composite stream that is modulated on a carrier signal; for example, in some examples digital channels transport VBI data to support analog video features, and in some examples a digital channel may provide additional digital data for other purposes). In some examples said additional data includes program associated data such as in some examples subtitles; in some examples text tracks; in some examples timecode; in some examples teletext; in some examples additional languages; in some examples additional video formats; in some examples music information tracks; and in some examples other additional data. In some examples said data includes other types and uses of additional data such as in some examples to distribute an interactive program guide(s); in some examples to download context-relevant supplemental content; in some examples to distribute advertising; in some examples to assist in providing meta-data enhanced programming; in some examples to assist in providing means for multimedia personalization; in some examples to assist in linking viewers with advertisers; in some examples to provide caption data; and/or in some examples to provide other data and assist with other functions. In some examples it is optional whether or not to play back or use all or any subset of said additional data when playing back or using said broadcast streams or programs that contain said additional data (whether in some examples encoded in the VBI, in some examples encoded in digital data track[s], in some examples provided by alternate means, or in some examples provided by additional means).
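
For illustration only, a minimal Python sketch of attaching program associated data (subtitles, timecode, teletext, additional languages) to a program and optionally using any subset of it on playback, as described above; the field names and container layout are hypothetical.

    # Illustrative sketch only: a program with additional data carried in
    # the VBI or a digital data track, used selectively at playback time.
    program = {
        "video": "main_video_stream",
        "audio": "main_audio_stream",
        "additional_data": {
            "subtitles": "subtitle_track",
            "timecode": "timecode_track",
            "teletext": "teletext_pages",
            "languages": ["es", "fr"],
        },
    }

    def play(program, use_additional=("subtitles",)):
        """Play back a program, optionally using any subset of its
        additional data (use of said data is optional, as stated above)."""
        active = {k: v for k, v in program["additional_data"].items()
                  if k in use_additional}
        return {"playing": program["video"], "with": active}

    print(play(program))                       # subtitles only
    print(play(program, use_additional=()))    # ignore all additional data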

In some examples said additional data may be included according to standards such as in an NTSC signal utilizing the NABTS [North American Broadcast Teletext Specification]; in some examples according to FCC mandates for CC [Closed Caption] or EDS [Extended Data Services]; in some examples other standards or practices may be followed such as an MPEG2 private data channel. In some examples said additional data is not limited by standard means for encoding and decoding said data, such as in some examples by modulation into lines of the VBI, and in some examples by a digital television multiplex signal that includes a private channel; other appropriate and known ways may be used as well, whether as alternates or additions to said standard means: in some examples said additional data may be directly communicated over a cable modem, in some examples may be communicated over a cellular telephone modem, in some examples may be communicated by a server over one or a plurality of networks, and in some examples any mechanism(s) that can transmit and receive digital information may be employed.

In some examples output 984 includes encoding and including various kinds of additional data 985 986 987 provided by the remainder of a TP device as described in this figure and elsewhere, such that said additional data is included in the output signal 984 988 990 991 977; and in some examples when said output is played back in a subsequent device's input said additional information may be used in various ways described herein and elsewhere (in some examples said additional data may include information such as the original source of a copyrighted program that has been used in synthesis and output; in some examples the date a synthesis was created and output; in some examples program title and description information for display in an electronic program guide; or in some examples other data included for other purposes and uses). Said output 984 may in some examples add data to a broadcast or a communication that goes beyond what is normally considered video and/or audio data.
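
For illustration only, a minimal Python sketch of output 984 embedding additional data (the original source of a copyrighted program used in a synthesis, the creation date, and program guide title/description) into an outgoing signal so a subsequent receiving device may use it; the dictionary layout and function name are hypothetical.

    # Illustrative sketch only: embedding metadata in an output signal.
    import datetime

    def embed_output_metadata(signal, source, title, description):
        signal = dict(signal)  # copy; do not mutate the caller's signal
        signal["metadata"] = {
            "original_source": source,                  # e.g. copyright origin
            "synthesis_created": datetime.date.today().isoformat(),
            "epg_title": title,                         # for program guides
            "epg_description": description,
        }
        return signal

    out = embed_output_metadata(
        {"video": "synthesis_stream"},
        source="Broadcast Network X, program 123",
        title="Shared Event Synthesis",
        description="Synthesis of a live broadcast with participant video",
    )
    print(out["metadata"]["epg_title"])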

One characteristic of TP devices is processing one or a plurality of simultaneous connections as described elsewhere. FIG. 33, “TP Device Processing—Multiple/Parallel,” illustrates some examples of simultaneous processing of said connections in one device 1311 by means of a scalable plurality of simultaneous processes. It also illustrates some examples of processing that is virtually integrated between two or a plurality of devices 1311 by means of a scalable plurality of simultaneous processes. In some examples simultaneous sources 1301 1301a,b,c . . . n that are processed include local I/O 1301, SPLS 1301, PTR 1301, focused connections 1301, broadcasts, and other sources as described elsewhere. In some examples said simultaneous sources 1301 1301a,b,c . . . n are received by simultaneous inputs 1302 1302a,b,c . . . n such as in some examples a network interface(s) 1303 as described elsewhere that includes in some examples simultaneous format conversion 1304 as described elsewhere. In some examples said source(s) 1301 1301a,b,c . . . n inputs 1302 1302a,b,c . . . n are simultaneously synthesized 1305 1305a,b,c . . . n by means such as in some examples designating inputs or channels 1306 as described elsewhere, in some examples mixing 1307 as described elsewhere, in some examples adding effects 1308 as described elsewhere, with (optional) user controls 1312 as described elsewhere. In some examples said simultaneous syntheses 1305 1305a,b,c . . . n are simultaneously output 1309 1309a,b,c . . . n by means such as outputs 1310 as described elsewhere, with simultaneous windows in a local device's displays 1314 1314a,b,c . . . n (that include audio as selected by a user), and/or with simultaneous windows in a remote device's displays 1314 1314a,b,c . . . n (that include audio as selected by a user), and/or simultaneous local and/or remote displays 1314 (that include audio as selected by a user) such as in some examples local display 1314, in some examples remote focused connections 1314, in some examples a stored recording(s) 1314, in some examples a broadcast program(s) 1314, and in some examples other outputs 1314 as described elsewhere.
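
For illustration only, a minimal Python sketch of one device handling a scalable plurality of simultaneous sources 1301a,b,c . . . n through input 1302, synthesis 1305 and output 1309 stages in parallel; the thread pool stands in for the device's simultaneous processes, and the stage functions are hypothetical.

    # Illustrative sketch only: parallel input/synthesis/output pipelines.
    from concurrent.futures import ThreadPoolExecutor

    def input_stage(source):      # 1302: receive + (optional) format conversion
        return f"converted({source})"

    def synthesis_stage(stream):  # 1305: designate, mix, add effects
        return f"effects(mix({stream}))"

    def output_stage(stream):     # 1309: simultaneous local/remote windows
        return f"window[{stream}]"

    def pipeline(source):
        return output_stage(synthesis_stage(input_stage(source)))

    sources = ["local_io", "SPLS_member_a", "focused_connection_b", "broadcast_c"]
    with ThreadPoolExecutor(max_workers=len(sources)) as pool:
        for result in pool.map(pipeline, sources):
            print(result)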

In some examples inputs 1302 1302a,b,c . . . n 1303 include, for each simultaneously received source 1301 1301a,b,c . . . n that requires it, simultaneously performing format conversion 1304 as described elsewhere. In some examples automated controls 1312 and/or manual controls 1312 may be applied so that each individual format conversion 1304 operates in accordance with the settings of said controls 1312 so that each control setting corresponds to the appropriate source(s) 1301a,b,c . . . n as described elsewhere.

In some examples synthesis 1305 1305a,b,c . . . n includes, for each simultaneously received source 1301 1301a,b,c . . . n that does not require format conversion 1304, and for each simultaneously format converted source 1304: in some examples automatically designating the appropriate sources 1306 for a specific synthesis 1305 1307 1308 and/or output 1309; and in some examples manually designating the appropriate sources 1306 for a specific synthesis 1305 1307 1308 and output 1309; and in some examples both automatically and manually designating the appropriate sources 1306 for a specific synthesis 1305 1307 1308 and output 1309. In some examples automated controls 1312 and/or manual controls 1312 may be applied so that each individual synthesis 1305 1305a,b,c . . . n 1306 1307 1308 operates in accordance with the settings of said controls 1312 so that each control setting corresponds in some examples to the appropriate synthesis 1305 1305a,b,c . . . n as described elsewhere; and in some examples to each synthesis step 1306 1307 1308 as described elsewhere. In some examples mixing 1307 includes automatically mixing 1307 designated sources 1306 as described elsewhere; and in some examples manually mixing 1307 designated sources 1306 as described elsewhere; and in some examples both automatically and manually mixing 1307 designated sources 1306 as described elsewhere. In some examples automated controls 1312 and/or manual controls 1312 may be applied so that each individual mixing 1307 of each set of designated sources 1306 operates in accordance with the settings of said controls 1312 as described elsewhere; and in some examples to each mixing step 1307 as described elsewhere. In some examples adding one or a plurality of effects 1308 includes automatically adding said effect(s) as described elsewhere; and in some examples manually adding said effect(s) as described elsewhere; and in some examples both automatically and manually adding said effect(s) as described elsewhere. In some examples automated controls 1312 and/or manual controls 1312 may be applied so that each individual addition of one or a plurality of effects 1308 operates in accordance with the settings of said controls 1312 as described elsewhere; and in some examples to each step in the addition of one or a plurality of effects 1308 as described elsewhere.
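
For illustration only, a minimal Python sketch of synthesis as the three steps named above (designating sources 1306, mixing 1307, adding effects 1308), each governed by control settings 1312; all settings and names shown are hypothetical.

    # Illustrative sketch only: designate -> mix -> add effects, per controls.
    def designate(sources, wanted):                 # 1306
        return [s for s in sources if s in wanted]

    def mix(designated, levels):                    # 1307
        return {s: levels.get(s, 1.0) for s in designated}

    def add_effects(mixed, effects):                # 1308
        return {"mix": mixed, "effects": effects}

    controls = {                                    # 1312: per-step settings
        "wanted": {"camera_a", "broadcast_b"},
        "levels": {"camera_a": 0.8, "broadcast_b": 0.5},
        "effects": ["background_replacement"],
    }

    sources = ["camera_a", "broadcast_b", "microphone_c"]
    synthesis = add_effects(
        mix(designate(sources, controls["wanted"]), controls["levels"]),
        controls["effects"],
    )
    print(synthesis)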

In some examples output 1309 1309a,b,c . . . n includes, for each simultaneously received source 1301 1301a,b,c . . . n that does not require synthesis 1305 1305a,b,c . . . n, and for each simultaneously synthesized 1305 1305a,b,c . . . n set of designated sources 1306: in some examples automatically outputting the appropriate one or a plurality of outputs 1309 1309a,b,c . . . n 1310 as described elsewhere, and in some examples manually designating the appropriate one or a plurality of outputs 1309 1309a,b,c . . . n 1310 as described elsewhere, and in some examples both automatically and manually outputting the appropriate one or a plurality of outputs 1309 1309a,b,c . . . n 1310 as described elsewhere. In some examples automated controls 1312 and/or manual controls 1312 may be applied so that each individual output 1309 1309a,b,c . . . n 1310 operates in accordance with the settings of said controls 1312 so that each control setting corresponds in some examples to the appropriate output 1309 1309a,b,c . . . n 1310 as described elsewhere; and in some examples to each output step 1309 1309a,b,c . . . n 1310 as described elsewhere.

In some examples a plurality of local and remote TP devices provide said simultaneous processing and/or output (such as in some cases by remote control, in some cases by a shared device, in some cases by other means, etc.) as described elsewhere such as in some examples FIG. 34 “Local and Distributed TP Processing Locations,” FIG. 73 “Example Presence Architecture,” FIG. 82 “TP Configurations for Presence at a Place(s),” FIG. 85 “TP Interacting Group(s) at Event(s) or Place(s),” and elsewhere. In some examples a local device may provide processing as described elsewhere such as in some examples that are in FIG. 29 through FIG. 33. In some examples a receiver's device may provide said processing as described elsewhere; in some examples a network resource device may provide said processing as described elsewhere; and in some examples a plurality of local and remote devices perform said simultaneous processing at a plurality of locations by a plurality of devices which each perform some or all of said simultaneous processing as described elsewhere.

Local and distributed TP device processing locations: Turning now to FIG. 34, “Local and Distributed TP Device Processing Locations,” in some examples one option is a TP device 1 1280 that provides processing as described elsewhere such as in some examples one or a plurality of sources are received 1281 1282 from remote sources like another TP device 1288 1281 1282, in some examples from an AID/AOD 1298 1281 1282, in some examples from optional network processing 1294 1281 1282, in some examples from optional remote sources 1285 1281 1282, in some examples from a local source 1282 like a camera or microphone, and in some examples from one or a plurality of other input sources 1281 1282. In some examples device reception 1281 of one or a plurality of sources 1288 1298 1294 1285 includes decoding 1281, in some examples decompression 1295, in some examples format conversion 1281 or another reception process as described elsewhere 1281. In some examples device synthesis 1283 is performed as described elsewhere, in some examples one or a plurality of foreground/background separations 1283 and/or background replacements is performed 1283, in some examples one or more sources 1281 1282 are “locked” as described elsewhere so their background may not be replaced; in some examples one or a plurality of subsystems 1283 are run as described elsewhere. In some examples one or a plurality of output(s) 1284 are displayed locally 1284 1281. In some examples one or a plurality of device output(s) 1284 are encoded for transmission 1281, in some examples compressed for transmission 1281, in some examples “locked” 1281 as described elsewhere prior to transmission, and in some examples streamed 1281 or transmitted 1281. In some examples synthesis 1283 and/or subsystems 1283 reflect(s) a user's profile 1299, in some examples a user's manual settings 1283, in some examples a different user's/tool's/source's settings 1288 1285 including background replacement(s) 1283 which in some examples includes a remote place 1285 1288 1294, in some examples includes content such as tools or resources 1285 1288 1294, in some examples includes advertisements 1285 1288 1294, or in some examples includes any combination of complete or partial background replacement(s) 1283 that may be different for one participant 1280 from one or a plurality of other participants 1288 1298 so that it is possible that the participants may be together digitally while their backgrounds appear to be different enough that each sees their shared presence as if they were in a different “digital place.” In some examples one or a plurality of advertisements displayed in said synthesis 1283 fit a participant's Paywall 1299 so it earns money for one or a plurality of participants, as described elsewhere.
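
For illustration only, a minimal Python sketch of the per-participant background replacement described above: each participant's synthesis may substitute a different remote place, content or Paywall-fitted advertisement, while a "locked" source keeps its original background; the profile fields and function name are hypothetical.

    # Illustrative sketch only: per-participant synthesis of one shared view.
    def synthesize_view(foreground, source_locked, participant_profile):
        if source_locked:
            # A "locked" source's background may not be replaced.
            return f"{foreground} + original_background"
        background = participant_profile.get("background", "remote_place")
        ad = participant_profile.get("paywall_ad")   # earns money if present
        view = f"{foreground} + {background}"
        return f"{view} + ad:{ad}" if ad else view

    # Two participants share presence digitally, yet each sees a different
    # "digital place"; a third source is locked and keeps its background.
    print(synthesize_view("participant_video", False,
                          {"background": "beach_at_sunset"}))
    print(synthesize_view("participant_video", False,
                          {"background": "conference_room",
                           "paywall_ad": "sponsor_banner"}))
    print(synthesize_view("participant_video", True, {}))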

From a network view two or a plurality of TP devices 1280 1288 1285 1298 1299 1294 are attached to one or a plurality of networks 1286, in some examples a Teleportal Network 1286, in some examples an IP network 1286 such as the Internet, in some examples a LAN (Local Area Network) 1286, in some examples a WAN (Wide Area Network) 1286, in some examples a PSTN 1286 such as a Public Switched Telephone Network, in some examples a cellular network 1286, in some examples another type of network 1286 such as a cable television network that is configured to provide IP and VOIP telephone service, and in some examples a plurality of disparate networks 1286.

In some examples a second or a plurality of TP devices 2 through N 1288 are attached to said network(s) 1286 and provide processing as described elsewhere such as in some examples one or a plurality of sources are received 1289 1290 from remote sources like another TP device 1280 1289 1290, in some examples from optional network processing 1294 1289 1290, in some examples from optional remote sources 1285 1289 1290, in some examples from a local source 1289 like a camera or microphone, and in some examples from one or a plurality of other input sources 1289 1290. In some examples device reception 1289 from one or a plurality of sources 1280 1298 1294 1285 includes decoding 1289, in some examples decompression 1295, in some examples format conversion 1289 or another reception process as described elsewhere 1289. In some examples device synthesis 1291 is performed as described elsewhere, in some examples one or a plurality of foreground/background separations 1291 and/or background replacements is performed 1291, in some examples one or more sources 1289 1290 are “locked” as described elsewhere so their background may not be replaced; in some examples one or a plurality of subsystems 1291 are run as described elsewhere. In some examples one or a plurality of output(s) 1292 are displayed locally 1292 1289. In some examples one or a plurality of device output(s) 1292 are encoded for transmission 1289, in some examples compressed for transmission 1289, in some examples “locked” 1289 as described elsewhere prior to transmission, and in some examples streamed 1289 or transmitted 1289. In some examples synthesis 1291 and/or subsystems 1291 reflect(s) a user's profile 1299, in some examples a user's manual settings 1291, in some examples a different user's/tool's/source's settings 1280 1285 including background replacement(s) 1291 which in some examples includes a remote place 1285 1280 1294, in some examples includes content such as tools or resources 1285 1280 1294, in some examples includes advertisements 1285 1280 1294, or in some examples includes any combination of complete or partial background replacement(s) 1291 that may be different for one participant 1288 from one or a plurality of other participants 1280 1298 so that it is possible that the participants may be together digitally while their backgrounds appear to be different enough that each sees their shared presence as if they were in a different “digital place.” In some examples one or a plurality of advertisements displayed in said device synthesis 1291 fit a participant's Paywall 1299 so it earns money for one or a plurality of participants, as described elsewhere.

In some examples network processing 1294 is another option wherein said processing 1294 is performed by a server, service, application, etc. accessible over one network 1286 or a plurality of disparate networks 1286. In some examples hardware or technology reasons for this include a device that is resource limited such as an AID/AOD 1298; in some examples a user may own or have access to a device that may be utilized by remote control 1294 (such as in some examples an LTP, in some examples an RTP, in some examples an MTP, in some examples a subsidiary device as described elsewhere, etc.); in some examples more advanced processing applications, features or processing capabilities may be desired than a local device can perform; etc. In some examples network processing 1294 may be performed for business or other reasons such as in some examples to insert advertising in the background 1294 1299 1285; in some examples to provide the same virtual location and content for all participants at an event 1285 1294 1299; in some examples to provide a different background, content and/or advertisements for each participant at an event 1280 1288 1285 1294 1299; in some examples to substitute an altered reality 1294 for a participant 1280 1288 with or without the participant's knowledge as described elsewhere; in some examples to provide additional processing 1294 as a free service or as a paid service; etc.

In any of these or other examples network processing 1294 is attached to said network(s) 1286 and provides processing as described elsewhere. In some examples of network processing 1294 a stream is received 1295 or intercepted 1295 such as in some examples from a device 1280 1288 1298 and/or a remote source 1285; in some examples one or a plurality of sources are received 1295 1296 from remote sources like a device 1280 1288 1285 1298, in some examples from another optional source that provides network processing 1294, in some examples from optional remote sources 1285 1289, and in some examples from one or a plurality of other input sources 1295 1296. In some examples network processing reception 1295 from one or a plurality of sources 1280 1288 1298 1285 includes decoding 1295, in some examples decompression 1295, in some examples format conversion 1295, or in some examples another reception process as described elsewhere 1295. In some examples network processing synthesis 1297 is performed as described elsewhere, in some examples one or a plurality of foreground/background separations 1297 and/or background replacements is performed 1297, in some examples one or more sources 1295 1296 are “locked” as described elsewhere so their background may not be replaced; in some examples one or a plurality of subsystems 1297 are run as described elsewhere. In some examples one or a plurality of network processing output(s) 1300 are encoded for transmission 1300, in some examples compressed for transmission 1300, in some examples “locked” 1300 as described elsewhere prior to transmission, and in some examples streamed 1300 or transmitted 1300. In some examples synthesis 1297 and/or subsystems 1297 reflect(s) a user's profile 1299, in some examples a user's manual settings 1297, in some examples a different user's/tool's/source's settings 1280 1288 1298 1285 including background replacement(s) 1297 which in some examples includes a remote place 1285 1280 1288, in some examples includes content such as tools or resources 1285 1280 1288, in some examples includes advertisements 1285 1280 1288 1299, or in some examples include any combination of complete or partial background replacement(s) 1297 that may be the same for all participants 1280 1288 1298; or in some examples complete or partial background replacement(s) 1297 may be different for one participant 1280 from one or a plurality of other participants 1288 1298 so that it is possible that the participants may be together digitally while their “digital place” and/or other parts of their background(s) appear to be different enough that they each appear to be in a different “digital place(s).” In some examples one or a plurality of advertisements displayed in said network processing synthesis 1297 fit one or a plurality of participants' Paywall(s) 1299 so said Paywall(s) earn money for one or a plurality of participants, as described elsewhere.

Device(s) commands entry: Turning now to FIG. 35, “Device(s) Commands Entry,” this illustrates some examples of part of the process of entering commands into TP devices. In some examples device commands entry starts with a device that is in an on state 1320 and has one or a plurality of processes that are in a waiting state ready to receive a command(s) 1320. In some examples this includes one or a plurality of user I/O device(s) 1321 and/or user I/O interface(s) 1321 that are on and ready to transmit or execute a command(s) 1321.

In some examples said user I/O device(s) 1321 and/or user I/O interface(s) 1321 are on and said device 1321 is on and ready to receive a command(s) 1320. In some examples a user I/O device(s) 1321 may be turned off 1322, and/or in some examples a user I/O interface(s) 1321 may be turned off 1322, in which case said user I/O device(s) 1321 and/or user I/O interface(s) 1321 must first be turned on at the device level 1320. When turned on, command entry begins, for each command 1323, by entering a command with a user I/O device or peripheral, then determining the type of command by determining the type of user I/O device that originates said command 1324 1325 1326 1327 1328, and the command issued 1324 1325 1326 1327 1328. In some examples said user I/O device(s) 1321 and/or user I/O interface(s) 1321 is a pointing device 1324 by which a user inputs spatial (in some examples including multi-dimensional) data generally indicated by physical gestures that are paralleled on a screen by visual changes such as moving a visible pointer (including a cursor); in some examples said pointing device 1324 is a mouse 1324; in some examples a pointing device is a trackball 1324; in some examples a pointing device is a joystick 1324; in some examples a pointing device is a pointing nub 1324 (a small pressure-sensitive knob such as those embedded in the center of a laptop keyboard); in some examples a pointing device is a stylus 1324 (a pen-like device such as used on a graphics tablet); or in some examples is another type of pointing device 1324.

In some examples said user I/O device(s) 1321 and/or user I/O interface(s) 1321 is a voice interface 1325 device by which a user inputs voice or speech commands to control a device; in some examples said voice control of a device includes a wired microphone(s) 1325; in some examples said voice control of a device includes a wireless microphone(s) 1325; in some examples said voice control of a device includes an audio speaker(s) to provide audio feedback 1325; in some examples said voice control 1325 affects part of a device but not all of the device such as voice control over voicemail, or such as a voice-controlled web browser; in some examples said voice interface 1325 is used to control another interface device such as a remote control 1327 that in turn converts said voice controls into commands that are sent to control the device.

In some examples said user I/O device(s) 1321 and/or user I/O interface(s) 1321 is a touch interface 1326 device by which a user touches a device's display with in some examples one finger 1326, in some examples two or more fingers 1326 (such as a “swipe”), in some examples a hand 1326, in some examples an object 1326 (such as using a stylus on a graphics tablet), in some examples other means or combinations. In some examples a touch interface is a touch screen 1326 that includes part of or all of a device's display(s); in some examples a touch interface is a touchpad 1326 that is a small stationary surface used for touch control such as for many laptop computers; in some examples a touch interface is a graphics tablet 1326 that is usually controlled with a pen or a stylus; or in some examples another type of touch interface 1326.

In some examples said user I/O device(s) 1321 and/or user I/O interface(s) 1321 is a remote control 1327 (as described in more detail in FIGS. 36 and 37) by which the user operates a TP device wirelessly from a close line-of-sight distance using a handheld controller, which is also known by names such as a remote, a controller, a changer, etc. Various types of remote controls are typically used to control electronic devices such as televisions, stereo systems, home theater systems, DVD player/recorders, VCR players/recorders, etc., and may also be used to control some functions of PCs (such as in some examples a PC's media functions). In some examples a “universal remote control” emulates and replaces the individual remote controls from multiple electronic devices by being able to transmit the commands from multiple brands and models to control numerous electronic devices. In some examples a remote control 1327 includes a touchscreen whose interface provides graphical means for representing functions or buttons virtually (such as a virtual keyboard for text input), for displaying virtual buttons or controls, for including feedback from a device, for showing which device is being controlled (where a TP device uses remote control of other devices), for adding instructions (if needed), and for providing other features and functions. In some examples motion sensing is one means of exercising remote control 1327 such as in some examples the Wii Remote, Wii Nunchuck and Wii MotionPlus for Nintendo's Wii game console (which use features such as accelerometers, optical sensors, buttons, “rumble” feedback, gyroscope, a small speaker, sensor bar, an on-screen pointer, etc.). Remote controls 1327 typically communicate by IR (Infrared) signals, Bluetooth or radio signals. In some examples of using a remote control 1327 a user presses one or a plurality of real buttons (or virtual buttons or images on a graphical touchscreen) to directly operate 1327 a local TP device; or in some examples to control 1327 another device that the TP device controls (such as in some examples when a TP device remote controls a PC 1327, in some examples when a TP device remote controls a television set top box 1327, in some examples when a TP device remote controls another TP device 1327, in some examples when a TP device remote controls a different type of electronic device 1327).

In some examples said user I/O device(s) 1321 and/or user I/O interface(s) 1321 is another type of user I/O device 1328 such as in some examples a graphics tablet or digitizing tablet 1328; in some examples a puck 1328 (which in some examples is used in CAD/CAM/CAE tracing); in some examples a standard or specialized keyboard 1328; in some examples a configured smart phone 1328; in some examples a configured electronic tablet or pad 1328; in some examples a specialized version of a touch interface may be controlled by a light pen 1328; in some examples eye tracking 1328 (in some examples control by eye movements); in some examples a gyroscopic mouse 1328 (in some examples a mouse that can be moved through the air and used while standing up); in some examples gestures with a tracking device 1328 (in some examples for controlling a device with physical movements with the gestures performed by a hand in some examples, by a mouse in some examples, by a stylus in some examples, or by other means); in some examples a game pad 1328; in some examples a balance board 1328 (in some examples for exercising with a video game system); in some examples a dance pad 1328 (in some examples for dance input during a game); in some examples a simulated gun 1328 (in some examples for shooting screen objects during a game); in some examples a simulated steering wheel 1328 (in some examples for driving a vehicle during a game); in some examples a simulated yoke 1328 (in some examples for flying a plane during a game); in some examples a simulated sword 1328 (in some examples for virtual fighting during a game); in some examples simulated sports equipment 1328 (such as a simulated tennis racket in some examples such as for playing a sport during a game); in some examples a simulated musical instrument(s) 1328 (such as a simulated guitar in some examples such as for playing an instrument during a musical game); in some examples sensors 1328 (in some examples sensors observe a user[s] and respond to inferred needs without the user providing an explicit command); in some examples another type of user I/O device 1328.

In some examples these varied user I/O devices 1323, features 1323, capabilities 1323, etc. are components of providing a customized, personalized yet consistent interface for the various TP devices employed by each user—as described in FIG. 7 through FIG. 9, in FIG. 17, FIG. 183 through FIG. 187, and elsewhere. In some examples these varied user I/O devices 1323, features 1323, capabilities 1323, etc. are components of providing a customized, personalized yet consistent interface for the various subsidiary devices employed by each user through the use of TP devices—as described in FIG. 7 through FIG. 9, in FIG. 17, FIG. 183 through FIG. 187, and elsewhere. In some examples these varied user I/O devices 1323, features 1323, capabilities 1323, etc. are components of providing a customized, personalized yet consistent interface for the various AIDs/AODs employed by each user as extensions of Teleportaling—as described in FIG. 9, FIG. 17, and elsewhere. In some examples of this, such as in FIG. 186, interface components 9298 may be stored and retrieved from repositories 9306 9309 and applied to new interface designs 9300 9301 to construct various new services 9302 9303 9308 or to update existing services 9304 9301 9302 9303 9308. In some examples this provides consistent interfaces that are useful and predictable across a broad range of varied user I/O devices 1324 1325 1326 1327 1328 for numerous core functions of a digital environment such as communicating, viewing, recording, creating, editing, broadcasting, etc. with multiple simultaneous input and output streams and channels for use on TP devices of varying capabilities and form factors.

In some examples after determining the type of command it is by determining the type of user I/O device that originates said command 1324 1325 1326 1327 1328, and the command issued by said user I/O device 1324 1325 1326 1327 1328, said command 1323 is received 1330. In some examples said command 1323 1324 1325 1326 1327 1328 is a TP device command 1331 that is immediately recognized such as in some examples to select an SPLS, in some examples to open an SPLS, and in some examples to open a focused connection with one or a plurality of SPLS members. In some examples said TP device command 1331 is immediately applied to the appropriate Device in Use (DIU) which in some examples is a Local Teleportal 1335; in some examples is a Remote Teleportal 1335; in some examples is on a Teleportal network such as in some examples a Teleportal Server 1335, in some examples a TP service 1335, etc.; in some examples is a TP application 1335; in some examples is a subsystem 1336 in a TP device 1335; in some examples is a TP subsystem 1336 controlled by an RCTP (Remote Control Teleportal) 1337; in some examples is a TP subsystem 1336 controlled by a VTP (Virtual Teleportal) 1338; in some examples is an RCTP (Remote Control Teleportal) 1337; and in some examples is a VTP (Virtual Teleportal) 1338.

In some examples said entered command 1323 1324 1325 1326 1327 1328 is not a TP device command 1331, but instead it is from a known I/O device 1332 whose commands are recognized as relating to a specific DIU (Device in Use) 1335 1336 1337 1338; or in some examples said command is a known device command 1332 that applies to a particular DIU 1335 1336 1337 1338. In some examples a known I/O device command 1332 is not a TP device command 1331, so it is translated 1333 by receiving the command sent 1323 1324 1325 1326 1327 1328 and determining the TP command 1333 1334 necessary to perform the requested action. In some examples entering a command 1323 on a user I/O device 1324 1325 1326 1327 1328 that is directed toward a particular DIU such as in some examples a subsidiary device 1337 controlled by an RCTP, or in some examples an AID/AOD 1338 controlled by a VTP, causes an automated command translation 1332 1333 1334 which in some examples retrieves from (local or remote) storage 1334 a list of available commands for said DIU and each of their RCTP parallel commands 1337, and each of their VTP parallel commands 1338. Said translation 1333 1334 selects the appropriate RCTP command 1337, or VTP command 1338, as needed for the particular DIU that is being controlled 1337 1338. Said translated command 1333 1334 is then sent to the particular DIU 1337 1338 to perform the requested action.
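
For illustration only, a minimal Python sketch of the command translation 1333 1334 described above: an incoming command is looked up against a stored list of available commands and their RCTP or VTP parallels for the particular Device in Use; the table contents and names are hypothetical.

    # Illustrative sketch only: (DIU type, incoming command) -> parallel command.
    COMMAND_TABLE = {
        ("subsidiary_rctp", "play"): "RCTP_PLAY",
        ("subsidiary_rctp", "volume_up"): "RCTP_VOL_UP",
        ("aid_aod_vtp", "play"): "VTP_PLAY",
        ("aid_aod_vtp", "volume_up"): "VTP_VOL_UP",
    }

    def translate(diu_type, command):
        translated = COMMAND_TABLE.get((diu_type, command))
        if translated is None:
            raise KeyError(f"no parallel command for {command!r} on {diu_type}")
        return translated  # then sent to the DIU to perform the action

    print(translate("subsidiary_rctp", "volume_up"))  # -> RCTP_VOL_UP
    print(translate("aid_aod_vtp", "play"))           # -> VTP_PLAY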

In some examples said entered command 1323 1324 1325 1326 1327 1328 is not a TP device command 1331, and it is also not a known I/O device command 1332, and it is also not a known device command 1332 that applies to a particular Device in Use (DIU) 1335 1336 1337 1338, so in some examples a new user I/O device 1340 may be added; in some examples a new feature 1340 may be added to an existing user I/O device 1323 1324 1325 1326 1327 1328; and in some examples a new command 1340 may be added to an existing user I/O device 1323 1324 1325 1326 1327 1328. In some examples the addition of a new user I/O device 1340, a new feature 1340 to an existing user I/O device, or a new command 1340 to an existing user I/O device (herein collectively referred to as an “Addition”) starts by initiating said Addition 1341; in some examples said Addition 1341 requires (optionally) automatically or manually retrieving 1342 the appropriate configuration from (local or remote) storage 1343 (which may include in some examples an installation CD-ROM 1342, in some examples an installation DVD 1342, in some examples a manual or automated download 1342, or in some examples other manual or automated means for retrieving 1342 1343 a configuration); in some examples configuration 1344 of said Addition is automated while in some examples configuration 1344 is a manual step; in some examples one or a plurality of (optional) tests 1345 may be performed automatically and visibly, in some examples said tests 1345 may be performed automatically and invisibly, in some examples said tests 1345 may be performed manually, and in some examples testing 1345 is not performed; in some examples tests 1345 are performed and if one or more parts of said tests fail re-configuration 1344 may be performed, or (optionally) a different configuration may be retrieved 1342 1343 to perform said re-configuration 1344; in some examples use 1346 of said Addition requires the user or the system to modify the Addition and in such a case re-configuration 1344 may be performed, or (optionally) a different configuration may be retrieved 1342 1343 to perform said re-configuration 1344; in some examples use 1346 of said Addition accomplishes the desired result so that said Addition 1340 is complete and goes into use 1321.
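
For illustration only, a minimal Python sketch of the Addition flow 1340 through 1346: retrieve a configuration 1342 1343, configure 1344, optionally test 1345, and re-configure from a different configuration on failure before the Addition goes into use 1321; the retrieve/configure/test functions are hypothetical placeholders.

    # Illustrative sketch only: configure-test-reconfigure loop for an Addition.
    def retrieve_configuration(source):
        return {"source": source, "settings": "default"}

    def configure(config):
        return dict(config, configured=True)

    def test(config):
        # A real test would exercise the new device, feature or command.
        return config.get("configured", False)

    def add_addition(sources=("local_storage", "download")):
        for source in sources:            # try an alternate config on failure
            config = configure(retrieve_configuration(source))
            if test(config):
                return f"Addition in use (configured from {source})"
        raise RuntimeError("Addition failed: no working configuration found")

    print(add_addition())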

Universal remote control: One category of user I/O devices 1321—a TP Universal Remote Control (URC) 1327—has the potential to improve the use of other digital devices substantially, because said TP remote controls 1327 separate their use from the need to control each TP device directly and individually—making it possible to use and control one or a plurality of devices from a single portable and wireless controller. Said URC is described in FIG. 36 and FIG. 37:

FIG. 36, “Universal Remote Control”: In some examples a universal remote control can be used to control the use of other TP devices. In some examples said controlled TP devices may be used to control TP subsidiary devices (as described elsewhere); and in some examples said controlled TP devices may be used to control RTPs (as described elsewhere). In such a case said controlled TP devices do not need to each be run directly and personally; instead, a plurality of TP devices and their plurality of digital realities may be chosen, run, created, used, etc. from one or a plurality of TP remote controls.

FIG. 37, “Universal Remote Control Interface”: In some examples a single remote control may dynamically discover and take control of a plurality of TP devices so that a user may select and control one or a plurality of controllable devices. In some examples said remote control displays scrollable or selectable portions of a selected device's interface; in some examples said remote control displays a selected device's control interface; in some examples the remote control displays a specialized control interface; and in some examples the remote control displays a subset of a device's interface (or its control interface). In some examples a remote control's interface may be updated with marketing messages or advertising such as in some examples by fitting a user's behavior and use of a TP device, and in some examples by repeating a set of marketing messages in accordance with advertiser specifications and advertisement purchases.

Turning now to FIG. 36, “Universal Remote Control (URC),” with one or a plurality of TP remote controls 1370 a user may utilize one or a plurality of TP devices 1380 1385 in some examples; utilize one or a plurality of TP subsidiary devices 1387 in some examples; and/or utilize one or a plurality of AIDs/AODs 1386 in some examples—literally a range of digital devices 1380 1385 1386 1387 and digital capabilities—without needing to run each one of them personally and directly. Instead, a growing range of digital devices, environments, tools, services, applications, etc. 1380 1385 1386 1387—together, a plurality of digital realities—may be created, run and used from one or a plurality of TP remote controls 1370.

As a result in some examples a Universal Remote Control (herein URC) provides a consistent system wherein the devices, services, applications, etc. 1380 1385 1386 1387 (which in some examples may also be other types of electronic devices) and the associated remote control(s) 1370 automatically connect and communicate as soon as both have power and are turned on—in other words, using this universal remote control system is automated.

URC 1370: In some examples said URC 1370 includes a display screen 1372 1374 and one or more means for user input 1372 1373 1375 which in some examples includes a touchscreen 1372 1375, in some examples includes physical buttons 1373 1375, and in some examples includes other user input means such as described in user I/O devices in FIG. 35 and elsewhere. Said URC 1370 also includes wireless communications 1376 that may employ any type of wireless communications (which in some examples is WiFi 1376 1388, in some examples line-of-sight IR [Infrared] 1376 1388, in some examples radio 1376 1388, in some examples Bluetooth 1376 1388, and in some examples other means for wireless communication 1376 1388) that is configured to communicate with one or a plurality of devices 1380 1388 and can couple together an enabled URC(s) 1370 and an enabled device(s) 1380. In some examples said URC's display screen 1372 1374 displays one or a plurality of components of said controlled device's interface 1381 1383 where said display 1372 1374 may employ any type of display (which in some examples is an LCD [Liquid Crystal Display] that includes a touchscreen for user input). In some examples said URC 1370 includes a processor 1377 which may employ any type of computer processor (which in some examples is a CPU [Central Processing Unit] 1377, in some examples is a DSP [Digital Signal Processor] 1377, in some examples is a microcontroller 1377, in some examples is a device controller 1377, in some examples is a computation engine 1377 and in some examples is other means for processing 1377). In some examples said URC 1370 includes local memory 1378 and local storage 1379 which may employ any type of volatile and non-volatile storage that can hold data in some examples when the URC 1370 is powered down, and in some examples when the URC 1370 is on and processing (which in some examples is RAM [Random-Access Memory] 1378, in some examples SRAM [Static RAM] 1378, in some examples DRAM [Dynamic RAM] 1378, in some examples a hard drive 1379, in some examples flash memory 1379, in some examples ROM [Read-Only Memory] 1379, in some examples EPROM [Erasable Programmable Read-Only Memory] 1379, in some examples an optical disk drive 1379, and in some examples is other means for memory 1378 and storage 1379).

TP device(s) remote control processing 1380: In addition to other hardware, functions, features and capabilities as described elsewhere, in some examples a TP device that is enabled for remote control includes Remote Control Processing (herein RCP) 1380. In some examples said RCP 1380 includes wireless communications 1388 that may employ any type of wireless communications (which in some examples is WiFi 1388 1376, in some examples is IR 1388 1376, in some examples is radio 1388 1376, in some examples is Bluetooth 1388 1376, and in some examples is other means for wireless communication 1388 1376) that is configured to communicate with one or a plurality of URCs 1370 1376 and can couple together an enabled URC(s) 1370 and an enabled device(s) 1380. In some examples said RCP 1380 includes processing 1383 1382 which in some examples employs the device's 1380 processor(s), and in some examples employs another processor(s) 1383 1382 which may be any type of computer processor (which in some examples is a microcontroller 1383 1382, in some examples is a DSP [Digital Signal Processor] 1383 1382, in some examples is a GPU 1383 1382, in some examples is a device controller 1383 1382, in some examples is a computation engine 1383 1382 and in some examples is other means for processing 1383 1382). In some examples said RCP 1380 includes local memory and local storage which may employ any type of volatile and non-volatile storage that can hold data in some examples when RCP 1380 is powered down, and in some examples when RCP 1380 is on and processing (which in some examples is the device's local memory and local storage, and in some examples is additional memory and/or additional storage).

Remote control of TP Devices: In some examples each TP device's RCP 1380 1381 includes interface processing 1383 that extracts the control and navigation components of the device's interface 1381 1383 as if that were presented in a small interface control window on its display. In some examples said interface processing 1383 utilizes a markup language that renders and describes a GUI (Graphical User Interface) 1383, controls 1383, as well as data 1383 (which in some examples is HTML 1383, in some examples is XML 1383, in some examples is XHTML 1383, in some examples is another user interface markup language 1383 that provides reuse for presenting a user interface). Instead of displaying said processed interface control window 1383 on the device's display 1381, said processed interface control window is communicated 1388 through a wireless connection to a URC's communications 1376, and displayed 1374 on the URC's display 1372. When a user interacts with the URC's display interface 1372 1374, the user's inputs 1372 1373 1375 are communicated 1376 to the device's RCP communications 1388 where said user's remote control inputs 1375 are received 1384, processed 1382 as if they were entered on a small interface control window on the local display, and said user inputs control the device 1381 (in some examples as described in FIG. 35 and elsewhere). In some examples said small interface control window includes RCTP control of a subsidiary device(s) 1387 as described elsewhere. In some examples said small interface control window includes control over an RTP(s) 1385 as described elsewhere.
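
For illustration only, a minimal Python sketch of the round trip described above: the RCP renders the device's control interface as markup 1383, the URC displays it 1374, and a user input 1375 is sent back, applied on the device 1382 1381, and the updated interface is re-displayed; the classes, markup and transport shown are simplified hypothetical stand-ins.

    # Illustrative sketch only: device-side RCP and URC exchanging an interface.
    class DeviceRCP:
        def __init__(self):
            self.volume = 5

        def render_interface(self):
            # A markup description of the small interface control window.
            return f"<interface><button id='vol_up'/>volume={self.volume}</interface>"

        def apply_input(self, user_input):
            if user_input == "vol_up":
                self.volume += 1
            return self.render_interface()   # updated interface re-sent to URC

    class URC:
        def __init__(self, device):
            self.device = device
            self.screen = device.render_interface()   # received and displayed

        def press(self, control_id):
            # The user's input is communicated to the device and the display
            # updates, as if the interface window were being used locally.
            self.screen = self.device.apply_input(control_id)

    remote = URC(DeviceRCP())
    remote.press("vol_up")
    print(remote.screen)   # shows volume=6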

Therefore, without constructing an “intelligent” remote control device or system, the TP's URC provides remote control 1370 over one or a plurality of devices 1380 1385 1387 through a scalable system of extending the display of a device's interface 1381 1383 1384 1388 to a remote control 1370 1371 where it is received and displayed 1376 1374 1372, and a user's inputs on said URC 1372 1373 1375 are communicated 1376 1388 and processed by said RCP 1384 1382 1381. As a result, in some examples a URC 1370 operates a TP device 1380 as if a user had interacted directly with an interface window that was displayed on the TP device's display, and therefore the URC 1370 controls said TP device 1380 from its remote display 1374 1372 of that rendered interface window, and a user's inputs 1372 1373 1375 are communicated 1376 1388 to said device's RCP 1388 1384 1382 1381. As resulting and continuing steps, after each said input 1375 1382, said device's interface 1381 is processed and updated 1383, said updated interface is communicated by the device 1384 1388 to the URC 1376 where the updated interface is displayed 1374 1372 and ready for further user inputs 1372 1373 1375—in the same continuous process as if the device's interface were being used locally.

In some examples for a particular device (such as in some examples a TP subsidiary device 1387, and in some examples an AID/AOD 1386) a URC 1370 may load an RCTP (Remote Control Teleportal) from its storage 1379, run said RCTP for that device by means of the URC's processor 1377 and memory 1378, utilize communications 1376 1388 to control a TP device 1380 and thereby communicate with the particular subsidiary device or AID/AOD under control, display said RCTP on the URC's display screen 1372 1374, accept user inputs 1372 1373 1375 to said RCTP by means described elsewhere, and communicate 1376 1388 said user inputs to control said TP device 1380. In some examples, for a particular device (such as in some examples a TP subsidiary device 1387, and in some examples an AID/AOD 1386), a URC 1370 may load a VTP (Virtual Teleportal) from its storage 1379, run said VTP by means of the URC's processor 1377 and memory 1378, utilize communications 1376 1388 to control a TP device 1380 and thereby communicate with a subsidiary device or an AID/AOD under control, display said VTP on the URC's display screen 1372 1374, accept user inputs 1372 1373 1375 to said VTP by means described elsewhere, and communicate 1376 1388 said user inputs to control said TP device 1380. In some examples for a particular device such as in some examples a TP subsidiary device 1387, a URC 1370 may display the part of a TP device's interface 1380 1381 that controls said TP subsidiary device 1387; such as in that example the TP device 1380 runs an RCTP that controls the subsidiary device 1387, and the URC displays the TP device's RCTP so the user can control the RCTP and subsidiary device by means of the URC 1370. In some examples a direct display of a device's interface may be less effective, even with translation of commands (as described elsewhere), such as in some examples for various types of TP subsidiary devices 1387, and in some examples for various types of AIDs/AODs 1386.

Remote control of some Subsidiary Devices 1387 (by means such as an RCTP), and/or of some AIDs/AODs 1386 (by means such as a VTP): In some examples a TP device is used to control some of one or a plurality of subsidiary devices by means of RCTP (Remote Control Teleportaling); in some examples said TP device's interface processing 1384 1383 includes the capability to translate one or a plurality of commands for a subsidiary device 1387 or for an AID/AOD 1386 as described in 1332 1333 1334 in FIG. 35 and elsewhere, and display those translated commands as if they were a TP device interface such as described herein 1381 1383 1384, in FIGS. 183 through 187 and elsewhere. Therefore in some examples, the interface to control some of a subsidiary device 1387 or some of an AID/AOD 1386 is processed to appear the same as or similar to a TP device interface 1383 as if it were a TP device. Furthermore, in some examples that translated and mapped TP device interface 1383 is communicated 1384 1388 to a URC 1376—so that a URC 1370 1371 may control a TP device 1381 1385 in some examples, a subsidiary device 1387 in some examples, or an AID/AOD 1386 in some examples. In some examples extracting the control and navigation components and/or commands that match a TP device interface and presenting them on the remote control's display similar to a TP device's interface produces a wireless connection and an interactive remote control display of those commands that may be executed on a subsidiary device 1387 or on an AID/AOD 1386. When a user employs the URC 1370 1371, it operates through the RCP 1380 and its command translation to remotely control some of a subsidiary device 1387 or some of an AID/AOD 1386. Therefore, without constructing an “intelligent” remote control device or system, this provides some remote control over one or a plurality of devices through a scalable system of interactive interface extension.

Turning now to FIG. 37, “Universal Remote Control Interface (URCI),” in some examples a device is turned on 1350 (such as described in 1380 and elsewhere) and said device is waiting for a URC to send its ID or its user's input(s). To start discovering and connecting to devices 1380 a URC 1370 must be turned on, at which point the default is for the URC's communications 1376 to broadcast its last used user ID as its discovery command 1351. Optionally, a user may select a different identity 1352 for a URC 1351 (as described elsewhere), and optionally one or a plurality of said user's identities may require authentication (as described elsewhere). Optionally, turning on a URC may have a default setting to require identity selection 1352 and authentication 1352 to prevent taking control of a secure device by means of its URC. Devices (such as an enabled and configured TP device 1380 in some examples) that receive the communicated discovery command communicate a response 1388 that is received by the URC 1353. In some examples said discovery process 1351 1352 1353 1354 occurs automatically for each discovered device; in some examples said discovery process may have one or a plurality of errors 1354 in which case AKM instructions (Active Knowledge Machine guidance, as described elsewhere) for manual discovery and connection may be displayed in some examples on the URC's screen 1354 1370, and in some examples on the device's screen 1354 1380. This discovery and communication process 1351 1352 1353 1354 repeats until the available devices have been discovered and subsequent preparation steps have been performed (1355 1356 1357 1358 as described below). Thereafter, previously discovered devices do not need to be rediscovered when they are used. In addition, said URC periodically broadcasts to discover new devices 1351. Also additionally, said user may choose a different identity 1352, in which case said URC broadcasts 1351 to discover devices appropriate for that identity. Also additionally, said user may add a plurality of identities for simultaneous use 1352, in which case said URC broadcasts 1351 to discover devices appropriate for that user's current set of open identities.
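
For illustration only, a minimal Python sketch of the discovery process 1351 through 1354: the URC broadcasts its current user identity as its discovery command, and each enabled device that accepts that identity responds and is added to the device list 1355; the device records and identity check are hypothetical.

    # Illustrative sketch only: identity-based URC discovery broadcast.
    DEVICES = [
        {"name": "Living-room LTP", "allowed": {"alice_home", "alice_work"}},
        {"name": "Office RTP", "allowed": {"alice_work"}},
        {"name": "Secure lab TP", "allowed": {"bob_lab"}},  # another identity
    ]

    def discover(identity):
        """Broadcast an identity as the discovery command; collect responses
        from each device that accepts that identity."""
        return [d["name"] for d in DEVICES if identity in d["allowed"]]

    device_list = discover("alice_work")   # last-used identity by default
    print(device_list)                     # both devices open to alice_work

    # Choosing a different identity re-broadcasts and finds its devices.
    print(discover("bob_lab"))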

URC display of a device: A device's response 1353 may optionally cause a URC to display in some examples the newly connected device's name 1354, in some examples the device's manufacturer's logo 1354, in some examples a list of controllable functions for user selection 1354 (such as if an LTP in some examples can open one or a plurality of SPLS's 1354, in some examples open one or a plurality of focused connections 1354, in some examples watch one or a plurality of broadcasts 1354 by selecting between a plurality of sources, in some examples play a pre-recorded DVD movie 1354, in some examples provide other functions 1354), etc. Optionally, in some examples one or a plurality of portions of said initial or subsequent display (such as in some examples a manufacturer's logo, in some examples the device's name, in some examples the list of controllable features available, in some examples other information or video) may be communicated 1388 from said controlled device's storage; in some examples one or a plurality of portions of said initial or subsequent displays may be pre-stored on said URC 1379 and displayed 1372 1374 from said URC's storage 1379; in some examples one or a plurality of portions of said initial or subsequent displays may be stored remotely and retrieved by said controlled device 1380, then downloaded and communicated 1388 to said URC 1376 and displayed by said URC 1354 1372 1374.

Device selection (list, interface, navigation, etc.): In some examples as a device is discovered and connected 1351 1352 1353 1354 it is added to a device list 1355 of one or a plurality of controllable devices that may be accessed at any time to select a device to control 1360, and when said device list is accessed 1355 it is displayed on the URC 1372 1374 so that a user can select the desired device to control 1360. In some examples said device list 1355 is text; in some examples said device list 1355 is graphical icons; in some examples said device list 1355 is hypertext links; in some examples said device list 1355 is a menu; in some examples said device list 1355 is an interface widget (such as a graphical map, a pulldown list or another type of widget interface); in some examples said device list 1355 and device selection 1360 is provided by other navigation and/or other interface means. In some examples said device list 1355 includes too many devices to fit on one URC screen, and in this case various types of known navigation may be used such as in some examples multiple URC screens with navigation between the screens 1355; in some examples devices may be grouped in device categories (such as in some examples categories such as TP devices, PCs/computers, other subsidiary electronic devices, AIDs/AODs, etc.) so that one selection screen 1355 utilizes a hierarchy of categories and each category's list of devices; in some examples other means for a device selection interface and navigation may be employed to find and select a larger number of devices.
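
As one hypothetical illustration of such a categorized, multi-screen device list, the following sketch groups discovered devices by category and paginates the entries; the sample device names and the per_screen parameter are assumptions for illustration, not part of the specification:

    devices = [
        {"name": "LTP-den", "category": "TP devices"},
        {"name": "study PC", "category": "PCs/computers"},
        {"name": "hall camera", "category": "subsidiary devices"},
        {"name": "tablet", "category": "AIDs/AODs"},
    ]

    def device_list_screens(devices, per_screen=3):
        # group discovered devices by category, then paginate the list so
        # that more devices than fit on one URC screen remain navigable
        groups = {}
        for d in devices:
            groups.setdefault(d["category"], []).append(d["name"])
        entries = [c + ": " + n
                   for c in sorted(groups) for n in sorted(groups[c])]
        return [entries[i:i + per_screen]
                for i in range(0, len(entries), per_screen)]

    for number, screen in enumerate(device_list_screens(devices), 1):
        print("screen", number, screen)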

Device Interface communications and use: In some examples as each device is added to said device list 1355 its Device Interface (herein DI) is downloaded 1356 to the URC and stored in memory 1378 so that said DI is immediately available to be displayed 1361 as soon as a specific device is selected 1360. In some examples said DI is downloaded from a device 1357; in some examples said DI is downloaded from another source 1358; in some examples parts of said DI have been previously downloaded to the URC (such as in some examples a manufacturer's logo, in some examples a list of controllable device features that may be selected, and in some examples other data) and is stored 1379 in said URC for repeated uses over time. As described elsewhere, in some examples as said DI is used 1361 1369 1362 it is displayed on the URC 1372 1374; in some examples a user interacts with said DI 1362 on the URC by means such as a touchscreen 1372, or buttons 1373, or any type of input 1375 or interaction; in some examples the user's input(s) are communicated 1363 by means of URC communications 1376 to the controlled device's communications 1388; in some examples the user's input or command is performed by the controlled device 1384 1382; in some examples the controlled device's interface is (optionally) updated 1381 1383 by processing means described elsewhere (because in some examples an operation may only be started and stopped such as by selecting a play or pause button without needing to update the interface, while in some examples an operation may be changed such as by displaying an EPG [Electronic Program Guide] to end one broadcast by choosing a different broadcast and starting to play it); in some examples the updated DI is communicated by communications on the controlled device 1384 1388 and received by the URC's communications 1376; in some examples an entirely updated DI is displayed 1374 1372 for use on the URC as needed 1365 1362, while in some examples secondary information is all that is updated such as adding information relating to a current function (such as in some examples the title of a movie that is being watched, or in some examples the name and background data of the identity in a focused connection).
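
The following sketch illustrates, under assumed names (di_cache, ControlledDevice, and the sample commands), one way a DI might be downloaded when a device is listed, used to execute a command, and optionally updated afterward:

    di_cache = {}   # Device Interfaces held in URC memory for instant display

    class ControlledDevice:
        def __init__(self, name, di):
            self.name = name
            self.di = di                   # {command: label} interface

        def download_di(self):
            return dict(self.di)           # DI sent when the device is listed

        def perform(self, command):
            # execute the command; some operations also update the DI,
            # e.g. adding the title of the broadcast now being watched
            update = {"now_playing": "Harbor Live"} if command == "play" else {}
            return {"ok": command in self.di, "di_update": update}

    device = ControlledDevice("LTP-den", {"play": "Play", "pause": "Pause"})
    di_cache[device.name] = device.download_di()       # stored before selection

    result = device.perform("play")                    # user input communicated
    di_cache[device.name].update(result["di_update"])  # optional DI update
    print(di_cache[device.name])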

Subsidiary devices and AIDs/AODs: In some examples where a device is a subsidiary device 1387 or an AID/AOD 1386, each step in this continuous control process 1369 1362 1363 1364 1365 is performed by utilizing command translation and interface means described elsewhere, with the result that in some cases very little control 1369 is possible, in some cases some features may be controlled 1369 but other features are not available, and in some cases considerable control 1369 may be used from a URC. In some examples where a device is a subsidiary device 1387 or an AID/AOD 1386, each step in this continuous control process 1369 1362 1363 1364 1365 is performed by utilizing RCTP means or VTP means described elsewhere and displaying said RCTP interface (in a single whole screen or in segmented parts), or VTP interface (in a single whole screen or in segmented parts), in the interface window 1372 on the URC 1371 1370, with the result that in some cases very little control 1369 is possible, in some cases some features may be controlled 1369 but other features are not available, and in some cases considerable control 1369 may be used from a URC.

Advertising and marketing: In some examples the URC's 1371 display 1372 may be updated with marketing or advertising messages such as in some examples each device vendor offering newer or upgraded models for sale; in some examples third-party retailers offering competing devices for sale; or in some examples behavioral tracking identifying a user's task(s) and offering products or services that fit said user's needs. In some examples said advertising and marketing process is attached to an external selling service or system that analyzes said data and provides specific advertisements that in some examples are based on the user's needs, in some examples are based on the user's context of use, and in some examples are based on what the vendor is trying to sell. In some examples this updating process 1369 (whether in some examples it is based upon using a controlled device 1380 1381 with a URC 1370 1371, or in some examples it is based on advertising and marketing) is repeated continuously 1362 1363 1364 1365 for each user input on each device selected.

Other high-level selections: In some examples a user selects a different device to use 1366 by using components of the URC interface 1372 1374 to display the list of controllable devices 1355 and selecting a different controllable device 1360, which has been discovered previously 1353 and had its DI downloaded 1356, so that when selected 1360 said new device's DI is immediately available for display and use 1361 for the available functions that may be controlled from the URC 1369 1362 1363 1364 1365. In some examples a user connects to a new or remote device 1367 by coming into range of it and automatically discovering it 1368 1351 1353 1354; while in some examples a user connects to a new or remote device 1367 by manually connecting to it 1368 1351 1353 1354 with the URC (such as in some examples a TP device 1380, in some examples a TP subsidiary device 1387, in some examples an AID/AOD 1386, or in some examples another type of device).

A world with Teleportal devices includes Remote Teleportals (herein RTPs)—which comprise Teleportal devices in a plurality of fixed and mobile locations that view those physical locations, provide live viewing of an RTP location(s), provide (optional) two-way communications with that place(s), gather various kinds of data from said place(s), and transform one or a plurality of RTP places' physical realities into multiple types of broadcasted and/or recorded digital realities.

In some examples RTPs extend and expand the current growth of GIS (Geographic Information Systems) and augmented reality. These current and emerging technologies include GPS (Global Positioning System), turn-by-turn directions, Google Street View, augmented maps that identify places we want to find, and many more new and emerging services such as pointing a smart phone's camera at a landmark and having Augmented Reality data (such as a restaurant menu, another customer's comments or a landmark's Wikipedia entry) displayed automatically. Together these are creating a “knowing world” with wireless services and systems that provide route guidance, information, and answers at many locations along the way. In such a world, RTPs are just one more eye to find the same destination to which everyone is traveling.

That “knowing world” may not be the biggest or the best prize. While those who live in it will be safer and more informed, this will be a paternalistic world whose systems turn its users into bystanders and observers even while they travel through their guided and information-rich physical environment. Instead of discovering, interacting and deciding or creating at every step, they are led turn-by-turn through the authorized ways of how to go everywhere, told the approved information about what they are seeing, and directed to what they should see and know during their journey. Their structured world will take them to far worse destinations than their goals seem to be at first. In the end a “knowing world” will organize the world's people—it is the people who will be directed, structured and known—as they are turned into sleepwalkers who are herded through a reality they don't own or control, guided to destinations that are curated and presented as if it were the only world in which they can and should live.

In some examples, however, Remote Teleportals provide new types of systems for constructing one or a plurality of digital realities out of our physical reality and sometimes beyond it, in addition to providing the standard live or augmented views of each physical place. In some examples multiple constructed realities are simultaneously broadcast from a single RTP's fixed or mobile locations, so that those who view that location remotely (as well as those who are in that place and view it digitally) can enjoy it as it is—and switch immediately to one or a plurality of creatively altered digital realities, according to the desires and tastes of one or a plurality of digital creators.

As will be demonstrated, the potentials of these multiple “digital realities” may be more dynamic, dramatic, artistic, fertile, inspired, visionary, original and “cool” than the “physical reality” they replace. In a brief summary, an RTP (as well as other TP devices whose processing may also be broadcast, such as LTPs and MTPs that are mobile) provides means to turn physical reality into a broadcasted stage, with tools that one or a plurality of creative imaginations can use to transform the ordinary into a plurality of digital versions of reality that anyone can choose to enjoy or alter further, rather than be guided through by today's GIS and augmented reality systems. These RTP digital realities are not under any type of control, are not curated, nor are they paternalistic. Rather than guiding us, they give us the freedom to represent reality in any way we want.

Each has different types of value: Today's emerging GIS, GPS and augmented reality systems enhance physical reality and RTPs can show that. In addition, RTPs also diverge from physical reality and provide means to transform the world—one place and one vision at a time—into a plurality of digital realities that might make the world into a plurality of more interesting, entertaining, compelling, or powerful visions of reality than existed before.

Some examples include: Art and music realities (Artists and musicians can add overlays to locations, adding sculpture gardens, static images, dynamically moving artworks, re-decorated buildings, creative digital interactions, musical themes and much more to numerous locations. Services can randomize these overlays and additions with various themed templates, allowing numerous artists to transform multiple physical places from the ordinary into the extraordinary); Graffiti realities (Graffiti artists and edgy musicians can add overlays and substitutions to locations, turning the world upside down with their divergent creations); A living, natural restored reality (Transformative programs could allow environmentalists to GPS an outdoor location, identify its natural plant and animal species, then overlay a fully restored scene over the current [usually badly managed] physical location—showing what it would look like if its natural plants and animals were restored to their full populations with that place's natural carrying capacity—then periodically switching back and forth to show the contrast between what nature would produce and that place after it was “civilized”); Events (Couple fixed or mobile RTPs with events, and broadcast digital events with accessible digital presences [such as live, recorded, or both] for interested audiences, as described elsewhere in more detail); Alerts realities (Couple various types of RTP sensors and systems with digital alerts so a plurality of “alerts channels” auto-display the types of events different people would like to see wherever they appear, as soon as they happen anywhere. Sound-based channels can jump to the latest location based on a type of sound such as guns firing [violent crimes, political repressions, firefights in war zones, etc.], car accidents, sirens or alarms, the sound of a person screaming, or more); Celebrities realities (Identity-based channels can jump to sightings of celebrities, political leaders, newsmakers, etc. [who are placed on face recognition “white lists”] by those who use templates and identifiers to create one or a plurality of “celebrity alert channels,” “politician alert channels,” “newsmaker alert channels,” etc.); Persons realities (Identity-based channels can jump to sightings of the people in one's life such as family, friends, co-workers, business associates, etc. [who are placed on face recognition “white lists”] by those who use templates and identifiers to create one or a plurality of “family alert channels,” “friends alert channels,” “co-worker and business alert channels,” etc.); Privacy realities (Couple RTP displays to face distortion software for those who put themselves on “privacy lists,” so when they're in public they're covered up in “RTP digital realities.”); Superhero realities (Extract “super heroes” from different types of movies or other sources, and extract sports figures in action from different types of sports events. Then cruise them through real locations, whether standing and walking, or performing their sport [such as catching a pass, running, snowboarding, skydiving, etc.], or performing daring missions [such as from superhero sequences in movies and television].
These can be overlaid into real places, both as if they were normally present, and also as if they were performing sports there, or fighting villains and saving that world); Healthy/Overstuffed realities (Reshape the people in a place by slimming those who are overweight so they are all height-weight proportionate, or inflate and parody the people so everyone there is obese.); Militarized/Demilitarized realities (Extract uniformed military and police, and their vehicles, and overlay them into locations so those places appear to be completely controlled police states. Or conversely, remove police from locations where they are normally positioned in force—to show how those places would look if they were not directly controlled by that government's police and military); Revolutionary realities (Digitally alter weapons in dictatorships such as by putting flowers in gun barrels, revolutionary graffiti on tanks and military vehicles, overlaid revolutionary political slogans on government buildings, and more, with these digital realities processed abroad and broadcast into dictatorial countries); Utopian realities (A variety of ideals may be dynamically visualized and overlaid on everyday places to show what they would be like if each of those ideals came true).

Multiple realities that produce new revenues and income: Audiences have value and can be monetized—and larger audiences earn more money—so the most popular digital realities, with larger audiences, are the most attractive for those who want to monetize all or parts of their RTP's outputs. An RTP's stream(s) can be received at one's local TP devices or on network devices, transformed into new digital realities, and rebroadcast—so one RTP's streams can produce multiple incomes, some of which are sharable with the RTP's source and some of which are unique to a creator. If wanted, a transformed stream(s) can be substituted for the original physical reality stream(s) at a source RTP(s) as if it were the real source (as described elsewhere), or broadcast as additional digital reality streams directly from a source RTP(s)—those audiences can be turned into revenues for both the RTP owners who create the original streams, and for those who create compelling digital realities that attract audiences.

With RTP-constructed digital realities one or a plurality of RTP owners and additional creators could simultaneously redesign the physical world's live or recorded streams in a plurality of ways and broadcast the transformations from one or a plurality of sources such as RTP's, LTP's, MTP's, etc. Those in the audience(s) can choose the versions of reality they prefer and want—with the audience including both remote observers and those in that place but using their TP screens to be guided through one of its digital transformations.

Then, as each person uses a screen to go through the world, they can choose the digital reality(ies) in which they want to live. The “knowing world” of GIS, GPS and augmented reality becomes just one option that can now compete with a plurality of constructed and imaginative digital realities—which can be designed to be more entertaining, more self-determined and more user-centered than the step-by-step “packaged reality” of GPS and augmented reality systems.

RTP-constructed digital realities may also be coupled with the ARM (Alternate Realities Machine, as described elsewhere) so that each person sets their own boundaries of what they want to include and exclude from their self-chosen “world(s)” (as described elsewhere). The ARM's personal boundaries prioritize (include) what a recipient wants, block or diminish what a recipient does not want, and add additional capabilities such as paywalls (which require those who want a person's attention to pay for that attention or be blocked instead), and protection (as described elsewhere).

RTP-constructed digital realities may also be coupled with Governances (as described elsewhere) so that groups may collectively construct digital realities (and optionally set their members' ARM boundaries) to fit each type of digital reality they choose to create (such as the three example governances described herein: IndividualISM's that expand self-directed personal freedoms, CorporatISM's that sell comprehensive solutions like entire lifestyles and living standards, and WorldISM's that support collective actions [like environmentalism] that transcend nation-state borders).

Taken together, it is clear that RTP processes of constructing digital realities have some differences from physical presence and GPS/augmented reality systems, especially since RTPs stream much more than “live” reality—RTPs may stream digital realities that may be altered in a plurality of locations by a plurality of creative imaginations—each for their own different purposes—and then (optionally) substituted and streamed as if their alteration(s) were the real source. Those who receive either “live” or constructed digital realities may also alter the received digital realities further during their presentation, if they impose their own self-selected boundaries during reception and local presentation by means such as the ARM (Alternate Realities Machine), governances boundaries, etc. as described elsewhere. Some examples of alterations during reception and presentation include prioritizing what each receiver desires, excluding what each receiver does not want, and applying other filters such as a Paywall so that receivers earn income for providing their scarce attention to specific added components such as to a specific product, brand or organization (that may be added during creation or during reception).

Therefore in some examples a meta-view of digital reality includes both the construction of a digital reality(ies) to suit varying goals, entertainments, desires, envisioned worlds, etc.; and also the filtering and altered presentation of said “real” and also digital realities as part of receiving them, so that a combination of a real place, creative digital reality constructions, and receivers' boundaries and alterations are simultaneous co-participants in creating the final digital reality(ies) experienced and enjoyed—with multiple monetization opportunities for multiple participants in this (value creation) chain. In combination with other capabilities described herein, RTP constructed digital realities are a way to grow beyond physical limits by providing devices, tools, resources and systems so that a plurality of creators and receivers may help choose, construct, live in and earn monies from any digital realities they prefer to ordinary physical reality. Over time, a plurality of constructed digital realities may be preferred to the ordinary physical world and may in some examples provide greater monetization opportunities and revenues for more participants (including recipients) than a controlled and “packaged” physical reality. If they choose, a plurality may try to shatter the glass ceiling between who they are and what they aspire to become by bringing the world they desire to (digital) “life,” then live their lives as they would like to “see themselves,” or perhaps in a simpler description, create the digital identities they would like to become and live the one or plurality of digital lifestyles they prefer.

Instead of strait-jacketed GPS and augmented reality systems that turn people into organized sleepwalkers who are herded through a curated and “knowing” world, some who think for themselves may attempt a breakaway and envision both their dreams and how they can become the independent actors who create and journey through digital realities that support their dreams. They may define or choose the constructed digital reality(ies) they want, instead of passing through a pre-defined physical reality that controls itself and them at the same time.

RTP processing: Together FIGS. 38 through 40 illustrate some examples of RTP processing including processing within a single RTP; a plurality of locations where the processing of RTP data may be performed; and resources that may be created and used to construct digital realities (as well as expand their use and increase their revenues); similar processes for constructing digital realities may in some examples be employed by other TP devices. Together FIGS. 41 and 42 illustrate some examples of deriving success metrics from digital realities and utilizing them for goals such as monetization, their rate of use and growth, etc. In addition, FIG. 43 illustrates some examples for using digital realities in ARM (Alternate Realities Machine) boundaries settings.

FIG. 38, “RTP Processing—Digital Realities”: In some examples RTPs (Remote Teleportals) are TP devices that contain both sensors and sufficient processing power to construct and deliver a plurality of synthesized digital realities under the control of one or a plurality of remote users. Much more than WebCams or surveillance systems, RTPs utilize live and recorded data to perform one or a plurality of separations, replacements, blendings, compression, encoding, streaming, etc. so that those who view that RTP location(s) remotely can enjoy it either as is, or switch immediately to one or a plurality of creatively altered digital realities, according to the desires and tastes of one or a plurality of digital creators. Each different synthesized digital reality can be turned on or off based upon audience presence indications so that numerous types of digital realities can be available for real-time construction, streaming and use as soon as audience members select each one, with that digital reality turned off and stored as “available” when no audience members are utilizing it. In addition these examples of constructing digital realities may in some examples be performed by other networked electronic devices such as in some examples Local Teleportals, in some examples Mobile Teleportals, in some examples network servers or applications, and in some examples other devices or means described elsewhere.

FIG. 39, “RTP Processing Locations”: In some examples some or all RTP processing is performed by an RTP device that gathers local data, then in some examples broadcasts said data, and in some examples synthesizes one or a plurality of digital realities (as described elsewhere) and broadcasts, communicates and/or records said synthesized digital reality(ies). In some examples a receiving TP device (such as an LTP or an MTP) receives, records and/or displays said RTP data which in some examples is by live streaming of actual reality or one or a plurality of digital realities that are synthesized by an RTP; and after reception said receiving TP device can process the RTP reception to synthesize different or additional digital realities that may or may not include additional live or recorded people; which may then be broadcast in some examples, recorded in some examples, shared within a focused connection in some examples, or utilized in any other known manner. In some examples said RTP data or re-processed TP data (herein received data) are received or intercepted on a network (in some examples by a server, in some examples by an application, in some examples by a service, or in some examples by another network means); and in some examples said network receiver processes said received data to synthesize different or additional digital realities that may or may not include additional live or recorded people; which may then be broadcast in some examples, recorded in some examples, communicated in some examples, or utilized in any other known manner (including transmitting said received and altered data as if it were the original RTP data or TP data from the original RTP or TP source). In some examples RTP processing is distributed between two or a plurality of RTP and/or TP devices and/or third-parties that are connected by means of one or a plurality of networks. In some examples RTP processing and/or synthesized digital realities are personalized to individual recipients; and in some examples RTP processing is personalized to groups of recipients. When personalized, synthesized digital realities enable different recipients to see differently processed and differently constructed video and audio including in some examples different advertisements, in some examples different people, in some examples different buildings with different logos and brand names, and in some examples other different components—therefore, in some examples digital reality is a constructed process that is based in part on who each recipient is and his or her interests, boundary settings, etc.
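
As a hypothetical illustration of the per-recipient personalization described above, the following sketch blends different replacement components into the same base frame for different recipients; the frame fields and profile keys are assumptions for illustration only:

    BASE_FRAME = {"place": "harbor", "billboard": "(blank)", "overlay": None}

    def personalize(frame, profile):
        # replacement blending keyed to each recipient: different ads,
        # logos or components may be constructed into the same stream
        out = dict(frame)
        if "paid_ad" in profile:
            out["billboard"] = profile["paid_ad"]
        if "theme" in profile:
            out["overlay"] = profile["theme"]
        return out

    recipients = {
        "anna": {"paid_ad": "travel ad", "theme": "sculpture garden"},
        "badri": {},                       # receives the place as it is
    }
    for name, profile in recipients.items():
        print(name, personalize(BASE_FRAME, profile))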

FIG. 40, “Digital Realities Construction/Resources”: In some examples resources are created, stored, retrieved and utilized for constructing digital realities; in some examples by copying the most popular and highest earning digital realities and/or components of digital realities; in some examples by providing means for creators of digital realities to access tools, templates and other resources to accelerate their construction; in some examples by identifying the best sources for components to develop improved new and better digital realities efficiently; and in some examples by providing users and customers with a prioritized list of the best digital realities. Said construction and resources process is flexible and modular so it can include new technologies, new vendors, new digital reality creators, etc. to accelerate the advancement and distribution of the best new digital realities constructs.

FIG. 41 and FIG. 42, “TP Devices' Digital Realities, Events, Broadcasts, Etc. and Revenues”: In some examples requests for digital realities are received and processed by a plurality of media, tools, resources, etc. In some examples said requestors may or may not be permitted to receive, join, share, etc. a specific digital reality based upon whether it is free, paid such as by purchasing a ticket or subscription, for group members only, or some other requirement. In some examples after acceptance a digital reality may be streamed or it may be customized for said recipient or device such as by blending in content, objects, etc. In some examples the receipt and use of the digital reality is validated and/or logged in order to provide revenue generating data such as reception, audience information, demographics, features used, etc. In some examples sponsor services enable sponsors to place advertising, marketing or direct selling within one or a plurality of digital realities, including in some examples logging the delivery of said sponsor data; in some examples logging to one or a plurality of databases records the utilization of said sponsor data by one or a plurality of recipients; and in some examples these data are reported directly to the appropriate sponsors. In some examples logged and stored data is employed to provide digital reality creators with improved information on audience size, revenues and other opportunities when constructing or editing digital realities—to enable the advancement of digital realities with greater growth and faster advances in the directions that produce the highest levels of interest, use, revenues, audiences, and other metrics. In some examples accounting systems invoice sponsors, receive sponsors' payments, determine what to pay device owners and/or digital reality sources, make payments to sources and/or device owners, report individual data on individual accounts, and aggregate data so that individual comparisons may be made with various revenue and audience size opportunities, and perform other accounting functions. In some examples any of these steps may be provided by one or a plurality of third parties.

FIG. 43, “Integration with ARM Boundaries Settings (Choose Your “Realities”)”: In some examples based on experiencing and/or learning about one or a plurality of digital realities, in some examples an identity can edit and alter one of its ARM (Alternate Realities Machine) boundary(ies); in some examples it can add a digital reality and make it a priority, or modify an existing digital reality's priority level; in some examples it can filter a digital reality by blocking or excluding it, or modify its filter level; in some examples it can add or remove a digital reality, or its components, to a paywall, to protection, or to other boundaries settings. By means of learning about digital realities and varying one's boundaries based on what each person does or does not want, one identity's digital reality(ies) may be considerably different from another person's or another identity's digital realities.
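
One possible shape of such editable boundary settings is sketched below; the class ARMBoundary, its methods, and the sample reality names are hypothetical illustrations rather than a definitive implementation of the ARM:

    class ARMBoundary:
        """One identity's boundary: priorities, exclusion filters and a
        paywall, editable as digital realities are experienced."""

        def __init__(self):
            self.priorities = set()
            self.filters = set()
            self.paywall = set()

        def prioritize(self, reality):
            self.filters.discard(reality)
            self.priorities.add(reality)

        def block(self, reality):
            self.priorities.discard(reality)
            self.filters.add(reality)

        def admit(self, reality, sender=None, paid=False):
            if reality in self.filters:
                return "excluded"
            if sender in self.paywall and not paid:
                return "blocked until paid"
            return "priority" if reality in self.priorities else "admitted"

    boundary = ARMBoundary()
    boundary.prioritize("graffiti reality")   # add a reality as a priority
    boundary.block("celebrity alerts")        # filter an unwanted reality
    boundary.paywall.add("sponsor-x")         # attention requires payment
    print(boundary.admit("graffiti reality"))
    print(boundary.admit("celebrity alerts"))
    print(boundary.admit("news", sender="sponsor-x"))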

Turning now to FIG. 38, “RTP Processing,” in some examples an RTP 2044 (as described elsewhere) includes being remotely controlled by one or a plurality of controlling electronic devices 2041 2042 2043 (as described elsewhere) over one or a plurality of networks 2045 (as described elsewhere). In some examples an RTP processes 2048 local content data gathered by said RTP 2044, including in some examples live video and audio of a place 2049, in some examples stored recordings of a place 2049, in some examples other local data gathered in real time or in recordings by said RTP's sensors 2049. In some examples RTP processing proceeds as described elsewhere (such as in FIG. 81 and elsewhere) to combine local content data with other content, persons, objects, events, advertising, etc. such that real-time replacements result in digitally modified places (with or without providing information that the place has been modified). In some examples various parts of the foreground and/or background of said local content data may be replaced in whole or in part; and in some examples the RTP's local content data may be used to replace the foreground and/or background of a different place (again, with or without providing information that the local place and/or the different place have been digitally modified)—such that the constructed place may include components from one or more places, people, products, objects, buildings, advertising, etc. Furthermore, as described elsewhere “reality replacement” may be provided either by an individual's choice, as part of an educational class or an educational institution's presentation of itself, as a business service, as part of delivering an experience (such as at a theme park or any business), as part of constructing a brand's image, as part of a government's presentation of its services, etc.

FIG. 38 illustrates some examples for using an RTP to construct one or a plurality of digital realities (which is described in more detail elsewhere). In a sending option 2048 that includes constructing one or a plurality of digital realities, an RTP may gather local content data 2044 2049 (including in some examples live video and audio of a place 2049, in some examples stored recordings of a place 2049, in some examples other local data gathered in real time or in recordings by said RTP's sensors 2049); provide separation 2054 and replacement blending 2055 (which in some examples blends content from an LTP 2050, in some examples blends content from an AID/AOD 2050, in some examples blends content from a subsidiary device 2050, in some examples blends in parts of a designed or virtual place 2050, in some examples blends in components of a live or recorded SPLS connection 2050, in some examples blends in advertising 2052, in some examples blends in marketing 2052, in some examples blends in paid content 2052, in some examples blends in paid messaging 2052, in some examples blends in an altered reality 2051 that has been substituted at a source 2051 with or without providing information about said substitution, etc.); then stream it 2056 over one or a plurality of networks 2045 to others. In some examples the construction of one or a plurality of digital realities may in some examples be performed by other networked electronic devices such as in some examples Local Teleportals, in some examples Mobile Teleportals, in some examples network servers or applications, and in some examples other devices or means described elsewhere.
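
A simplified sketch of this gather, separate, blend and stream sequence follows; the functions separate, blend and stream are hypothetical stand-ins for the separation and replacement blending processing referenced above (such as 3621 and 3630 in FIG. 81), not a definitive implementation:

    def separate(frame):
        # stand-in for foreground/background separation
        return {"foreground": frame["people"], "background": frame["scene"]}

    def blend(parts, additions):
        # stand-in for replacement blending: overlay content from other
        # devices, designed or virtual places, advertising, etc.
        constructed = dict(parts)
        constructed.update(additions)
        return constructed

    def stream(constructed):
        return "encoded and streamed: " + str(constructed)

    local = {"people": ["passerby"], "scene": "ordinary street"}
    additions = {"overlay": "dynamically moving artwork", "ad": "sponsor logo"}
    print(stream(blend(separate(local), additions)))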

In a receiver(s) alteration option 2048 that includes constructing one or a plurality of digital realities, an RTP may gather local content data 2044 2049 (including in some examples live video and audio of a place 2049, in some examples stored recordings of a place 2049, in some examples other local data gathered in real time or in recordings by said RTP's sensors 2049); then stream it 2056 over one or a plurality of networks 2045 to others such as in some examples an LTP user 2041, in some examples an MTP user 2041, in some examples an AID/AOD user 2043, in some examples a TP subsidiary device user 2042, etc.; wherein one or a plurality of receivers' device(s) 2041 2042 2043 perform separation (such as 3621 in FIG. 81 and elsewhere) and replacement blending (3630 and elsewhere); then said receiver(s) 2041 2042 2043 stream their constructed digital reality(ies) over one or a plurality of networks 2045 to others.

In a network alteration option 2048 that includes constructing one or a plurality of digital realities, an RTP 2044 may gather local content data 2044 2049 (including in some examples live video and audio of a place 2049, in some examples stored recordings of a place 2049, in some examples other local data gathered in real time or in recordings by said RTP's sensors 2049); then stream it 2056 (without constructing a digital reality) over one or a plurality of networks 2045; wherein said RTP's 2044 2056 stream may be intercepted and a separate networked application, networked server and/or networked service may provide separation (such as 3621 in FIG. 81) and replacement blending (3630 and elsewhere); then said network application, server and/or service may stream its constructed digital reality(ies) over one or a plurality of networks 2045 to others.

Reconstructing and modifying digital realities: In a receiver(s) alteration option 2048 an RTP may construct one or a plurality of digital realities 2049 2054 2055 2050 2051 2052 2056 as described elsewhere, and stream it (them) over one or a plurality of networks 2056 2045; wherein one or a plurality of receivers' device(s) 2041 2042 2043 provide further alterations to said constructed digital reality(ies) that may include separation (such as 3621 in FIG. 81 and elsewhere) and replacement blending (3630 and elsewhere); then said receiver(s) 2041 2042 2043 stream the reconstructed and modified digital reality(ies) over one or a plurality of networks 2045 to others. In a network alteration option 2048 an RTP may construct one or a plurality of digital realities 2049 2054 2055 2050 2051 2052 2056 as described elsewhere, and stream it (them) over one or a plurality of networks 2056 2045; wherein one or a plurality of said constructed reality(ies) stream(s) may be intercepted and a networked application, networked server and/or networked service may provide further alterations to said constructed digital reality(ies) that may include separation (such as 3621 in FIG. 81) and replacement blending (3630 and elsewhere); then said network application, server and/or service may stream the reconstructed and modified digital reality(ies) over one or a plurality of networks 2045 to others.

In some examples of a different kind of step, said constructed digital realities, and/or reconstructed and modified digital realities, may be substituted as a source 2051 (and 3627 in FIG. 81 and elsewhere) with or without providing information that said substitution has been made. In such a case, an expected “real” and live source may be replaced with an altered source 2051 3627, in some examples with clear and visible indication that said source has been transformed, but in some examples as a hidden process that provides a digitally altered reality without informing recipients of the transformation(s) and substitution(s).

In some examples an additional step is to apply RTP applications 2053 to said RTP streams 2056 and then publish said streams 2057 so that they may be found, enjoyed, used, etc. by others. In some examples said other applications 2053 include tagging with keywords 2053 2057, in some examples submitting streams 2056 to “finding” tools and services 2053 2057, in some examples submitting streams 2056 to “alerts services” 2053 2057, in some examples providing streams 2056 as broadcasts 2053 2057, in some examples recording streams 2053 2056 and scheduling said recordings 2053 2056 as scheduled broadcasts 2053 2057, etc. Similarly, the same types of applications may be applied to RTP streams that are processed by one or a plurality of receivers' device(s) 2041 2042 2043, and may also be applied to RTP streams that are processed by one or a plurality of separate networked application(s), networked server(s) and/or networked service(s). In some examples said other applications 2053 include known augmented reality applications that are not described herein; in some examples said other applications 2053 include known GPS location-aware services that are not described herein; in some examples said other applications 2053 include other types of services or applications that are not described herein.

Because in some examples said publishing 2057 may monetize both “live” RTP streams 2049 2056 and constructed digital realities 2044 2048 2049 2054 2050 2051 2052 2055 2056 (as described in FIG. 50 and elsewhere), there may be incentives to provide and deliver digital realities that are attractive, powerful and compelling for potentially wide use and enjoyment.

In some examples one or a plurality of RTPs 2044 2048 may each provide a plurality of “live” streams, streamed digital realities, and/or recorded “live” or digital realities. As a result said RTP 2044 2048 may not have sufficient resources to provide its component services and processing 2049 2044 2048 2049 2053 2054 2050 2051 2052 2055 2056; it may also have insufficient network bandwidth 2045 to deliver a plurality of simultaneous streams; it may also have insufficient capitalization to pay the equipment, maintenance and/or management costs of operation. With any of these or any other limiting factor(s) there is a need to focus said RTP's processing, bandwidth, management, etc. on its highest value operations.

In some examples a specific RTP application 2053 and/or a specific stream 2056 are initiated only when an appropriate audience or user presence indication 2058 is received 2053 2056. In some examples after an appropriate presence indication 2058 is received and the related RTP application 2053 or stream 2056 has been started 2058, said presence indication must be periodically renewed 2059 so that said application 2053 or stream 2056 are continued 2059. In some examples after an appropriate presence indication 2058 is received and the related RTP application 2053 or stream 2056 has been started 2058, said presence indication must be periodically renewed 2059 or else said application 2053 or stream 2056 time out and are terminated 2059. In some examples said presence indication 2058 2059 is based upon ARTPM presence described elsewhere; in some examples said presence indication 2058 2059 is based upon any known presence technology, system, application, etc.
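
A minimal sketch of this presence-gated start/renew/terminate cycle follows, assuming a fixed renewal timeout; the class PresenceGatedStream and the RENEWAL_TIMEOUT value are hypothetical illustrations:

    import time

    RENEWAL_TIMEOUT = 30.0        # seconds allowed between renewals (assumed)

    class PresenceGatedStream:
        def __init__(self, name):
            self.name = name
            self.running = False
            self.last_renewal = 0.0

        def presence_indication(self):
            # start (or continue) only while an audience is present
            self.running = True
            self.last_renewal = time.monotonic()

        def check(self):
            # terminate if presence was not renewed within the timeout;
            # the digital reality is then stored as "available"
            if self.running and time.monotonic() - self.last_renewal > RENEWAL_TIMEOUT:
                self.running = False
            return self.running

    stream = PresenceGatedStream("graffiti reality")
    stream.presence_indication()   # an audience member selects the stream
    print(stream.check())          # True while renewals keep arriving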

In some examples a plurality of RTP applications may run simultaneously 2053, and/or RTP “live” and constructed digital realities may be simultaneously streamed 2056, causing insufficient resources (as described elsewhere). In some examples an RTP application 2053 monitors and logs the total usage of each currently running RTP application 2053 (herein “Present Audience/Users 2058 2059”), and each current RTP stream 2056 (Present Audience/Users 2058 2059), to utilize said monitored data in allocating and prioritizing RTP resources 2044 2048 if and when they are insufficient. In some examples the utilization of said Present Audience/Users data 2058 2059 is pre-set based upon priorities such as the goals of the owner or manager (herein “owner”) of said RTP(s) 2044 2048. In some examples the RTP's owner's priority is audience size 2058 2059 so that if said RTP has insufficient resources the first application and/or stream to be terminated will be the one with the smallest size (e.g., the lowest number in the current Present Audience/Users data 2058 2059); and if additional applications and/or streams must be terminated that will be done based on a “lowest number of audience members or users first” model. In some examples the RTP's owner's priority is revenue and income so that if said RTP has insufficient resources the first application and/or stream to be terminated will be the one that produces the smallest revenues (e.g., anything given away free will be terminated first); and if additional applications and/or streams must be terminated that will be done based on a “least revenue produced first” model. In some examples the RTP's owner's priority is a combination of audience size (such as for growth) and revenues so that if said RTP has insufficient resources first the free applications will be terminated (e.g., the free applications that have the lowest number in the current Present Audience/Users data 2058 2059); and if additional applications and/or streams must be terminated that will be done based on a model such as “lowest number of audience members or users first,” then the smallest revenue producers next—until what is left includes the largest audiences (whether free or paid) with the streams and applications that produce the largest revenues.
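
The three owner-priority models described above (audience size, revenue, and a combination) can be illustrated by the following sketch, in which the sample streams and their numbers are hypothetical; each model simply changes the sort key that determines which stream is terminated first:

    streams = [
        {"name": "live harbor", "audience": 480, "revenue": 0.0},
        {"name": "graffiti reality", "audience": 60, "revenue": 12.5},
        {"name": "alerts channel", "audience": 900, "revenue": 3.0},
    ]

    def termination_order(streams, owner_priority):
        # orders streams so that the first listed is terminated first
        if owner_priority == "audience":
            key = lambda s: s["audience"]     # smallest audience first
        elif owner_priority == "revenue":
            key = lambda s: s["revenue"]      # least revenue (free) first
        else:  # combined: free streams by audience first, then by revenue
            key = lambda s: (s["revenue"] > 0,
                             s["audience"] if s["revenue"] == 0 else s["revenue"])
        return [s["name"] for s in sorted(streams, key=key)]

    print(termination_order(streams, "audience"))
    print(termination_order(streams, "revenue"))
    print(termination_order(streams, "combined"))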

RTP Processing Locations: Turning now to FIG. 39, in some examples one option is a sender 2064 which may be an RTP device as described elsewhere in more detail, or may be another type of Teleportal electronic device with sensors such as described elsewhere, or may be another type of electronic device with sensors. In a brief summary said sensor(s) data is received 2065 2060 2067 (including in some examples live video and audio of a place 2060, in some examples stored recordings of a place 2060, in some examples other local data gathered in real time or from stored recordings by said sensors 2060); and in some examples includes data from a remote source(s) 2060 2061 2062 (including in some examples advertising 2061, in some examples PTR (Places, Tools, Resources) 2061, in some examples a virtual place[s] 2061, in some examples a digital reality substituted as a source 2061, etc.) which in some examples is received by said sending device 2064 directly 2061 2060 2065, and in some examples is received by said sending device 2064 over one or a plurality of networks 2061 2062 2067 2065. Then in some examples separation 2066, blending 2066, replacements 2066, rendering 2066, encoding 2067, etc. are performed by said sender's device 2064; and the constructed output is streamed 2067 and/or transmitted 2067 over one or a plurality of networks 2062 to others, as well as (optionally) being displayed 2066 for said sender 2064. In some examples “live” source data from an RTP's sensors is streamed as received without further processing and the output is streamed 2067 and/or transmitted 2067 over one or a plurality of networks 2062 to others, as well as (optionally) being displayed 2066 for said sender 2064. In some examples the output 2066 (whether as received or after alteration[s]) receives processing from additional applications such as in some examples augmented reality, in some examples GPS location-aware data, etc. and the final output with additions is streamed 2067 and/or transmitted 2067 over one or a plurality of networks 2062 to others, as well as (optionally) being displayed 2066 with said additions for said sender 2064.

In some examples another option is a recipient 2068 as described elsewhere in more detail, but in a brief summary one or a plurality of sources 2064 2060 2072 2061 are received 2069 2070 (including in some examples live video and audio of a place 2060, in some examples stored recordings of a place 2060, in some examples other local data gathered in real time or from stored recordings by sensors 2060; in some examples advertising 2061, in some examples PTR (Places, Tools, Resources) 2061, in some examples a virtual place[s] 2061, in some examples a digital reality substituted as a source 2061, etc.) which in some examples is received by said recipient 2068 over one or a plurality of networks 2064 2060 2072 2061 2062 2069 2070. In some examples one or a plurality of sources 2070 are displayed 2071 and used as received. In some examples separation 2071, blending 2071, replacements 2071, rendering 2071, encoding 2071, etc. are performed by said recipient's device 2068 and the constructed output 2071 is displayed 2071 and used. In some examples the output 2071 (whether as received or after alteration[s]) receives processing from additional applications such as in some examples augmented reality, in some examples GPS location-aware data, etc. and the final output with additions is streamed 2069 and/or transmitted 2069 over one or a plurality of networks 2062 to others, as well as (optionally) being displayed 2071 with said additions for said recipient 2068. In some examples the displayed output 2071 (whether as received or after alteration[s]) is streamed 2069 and/or transmitted 2069 over one or a plurality of networks 2062 to others.

In some examples another option is a network alteration 2072 as described elsewhere in more detail, but in a brief summary one or a plurality of sources 2064 2068 2060 2061 are received 2073 by a separate networked application, networked server and/or networked service; in some examples one or a plurality of sources 2064 2068 2060 2061 are intercepted 2073 with or without notification by a separate networked application, networked server and/or networked service. In some examples (whether said sources are received or intercepted) one or a plurality of steps such as decompression 2074, decoding 2074, separation 2075, blending 2075, replacements 2075, rendering 2075, encoding 2076, compression 2076, etc. are performed by said network application, server and/or service 2072 to produce constructed output 2076. In some examples said constructed output 2076 receives processing from additional applications such as in some examples augmented reality, in some examples GPS location-aware data, etc. In some examples said constructed output 2076 is streamed 2077 and/or transmitted 2077 over one or a plurality of networks 2062 to others. In some examples various types of network alterations 2072 may be performed for a plurality of reasons such as in some examples inserting paid advertising in a stream or background 2072, in some examples providing the same shared location appearance and/or content for all recipients such as at a demonstration or presentation 2072, and in some examples substituting an altered reality at a source 2072 2061, etc.

In some examples other options include one or a plurality of users' profile records 2078 such as in some examples for personalization 2078; in some examples to retrieve and utilize an identity's boundaries 2078 (including in some examples retrieving a user's priorities to include them in replacements 2066 2071 2075 and/or in display[s] 2066 2071 2075, in some examples retrieving advertisements 2061 that fit a user's Paywalls and displaying them for earning income, etc.); in some examples to include governance attributes 2078, governance sources 2078, governance criteria 2078, etc.; or in some examples for other purposes appropriate for a user's profile 2078 or records 2078.

Digital Realities Construction Resources and Advancement Processes: FIG. 40, “Digital Realities Construction Resources,” illustrates processes of (1) in some examples creating new resources for digital realities construction; (2) in some examples constructing digital realities by copying the most popular and highest earning one(s); (3) in some examples providing means for creators of digital realities to quickly access tools, templates and other resources for constructing and implementing them rapidly; (4) in some examples quickly identifying and using the best digital realities as sources when constructing new digital realities, to learn from them and advance to newer and better digital realities at a faster pace—essentially, making it possible to develop and improve new and better digital realities efficiently; (5) in some examples providing users with consistent and predictable digital realities from a plurality of RTP sending sources, from a plurality of TP devices sources, from a plurality of network alteration sources, and from a plurality of other sources; etc. FIG. 40 illustrates how said processes are flexible, modular and consistent yet able to evolve to include new technologies, new vendors, and new digital reality creators so that a growing range of digital realities may be implemented—with a minimum of construction effort—so that numerous types of new digital realities may be created, added and streamed by both vendors and users.

In some examples a core process of the “Digital Realities Construction Resources” is to provide consistent high-level patterns 2081 2090, yet within each pattern provide easily added and potentially large improvements 2082 2096 2103 2104 in the ways digital realities are able to be constructed 2090 2091 2084. The sources of said improvements may be TPU (Teleportal Utility) Services 2097; TPU Applications 2098; large industry-leading vendors 2099 2100; new technology startups 2099 2100; various digital reality sources 2101; one or a plurality of RTP owners 2102, individual users 2102, digital reality audience members 2102, etc. The architecture provides capabilities so that each addition 2096 may be included 2103 2104 in one or a plurality of repositories 2090 and provided by one or a plurality of selection and delivery services 2091 (such as in some examples for selecting a type of digital reality 2091, in some examples for selecting and applying various elements of digital reality[ies] 2091 2090, and in some examples for selecting and applying elements so as to create new combinations and new digital realities 2091 2090) so that developers of new digital realities may use them to construct new digital realities 2084, or to modify or update existing digital realities 2088. This provides continuous improvement opportunities for digital realities to potentially become an accelerated creation of intuitive, rapidly maturing, increasingly familiar and stable digital realities that may be created and/or delivered by a plurality of types of devices, and used by growing audiences 2087 2106 2107 2108 who independently choose and enjoy the types of digital realities they prefer. Since audiences are valuable and can be monetized 2107 2108, the metrics and data on different digital realities 2087 2106 produces rankings that surface the most valuable digital realities 2107, and said rankings 2107 may be used when storing and selecting digital realities 2090 2091, and storing and selecting elements of digital realities 2090 2091—so that new and updated digital realities 2084 2088 may produce larger audiences 2087 2106 2107 2108 and larger incomes 2107.
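
As a hypothetical illustration of a virtual repository with a selection/delivery service that surfaces the most valuable elements first, consider the following sketch; the repository contents, rankings, and function names are assumptions for illustration only:

    repository = {
        "templates": {"city overlay": 0.9, "event stage": 0.6},
        "widgets": {"alerts feed": 0.8, "paywall meter": 0.7},
    }

    def contribute(kind, name, rank=0.0):
        # additions from utility services, vendors, startups or users
        repository.setdefault(kind, {})[name] = rank

    def select(kind):
        # selection/delivery service: the most valuable (highest ranked)
        # elements, as measured by audience and revenue data, come first
        return sorted(repository[kind], key=repository[kind].get, reverse=True)

    contribute("widgets", "celebrity alert", rank=0.95)
    print(select("widgets"))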

In some examples said digital realities construction 2080 begins by logging in to a TP device as a specific identity 2083 or user 2083 and starting the creation of a new digital reality by running a setup application 2083 such as in some examples a wizard 2083 and in some examples a software program 2083. Said setup application 2083 determines if the DIU (Device In Use) has constructed other digital realities by means of their stored profile(s) 2092 and attributes 2092. If that is true, then said setup 2083 utilizes said previous digital realities settings 2092 as the default selections for creating a new digital reality, which includes said DIU's capabilities for constructing and delivering digital realities. If said DIU does not have other digital realities 2092, then said setup 2083 retrieves appropriate digital realities settings from appropriate virtual repositories 2081 2091 2090 to provide an initial setup 2083. The user may then edit said DIU's selection(s) 2091, element(s) 2091, etc. 2084.

In some examples said user then selects an appropriate type of digital reality 2091, and desired elements from virtual repositories 2091 by means of one or a plurality of selection and delivery services 2091. In some examples said selections 2091 include types of digital realities 2090, in some examples templates (layouts) 2090, in some examples designs (appearance) 2090, in some examples patterns (functions) 2090, in some examples portlets (components) 2090, in some examples widgets (components) 2090, in some examples servlets (components) 2090, in some examples applications (software) 2090, in some examples features (such as alerts, sensors, services, etc.) 2090, in some examples APIs 2090, etc. In some examples after said selections have been made 2091 2090 and are displayed 2084, they are edited such as by choosing, arranging and editing said elements manually and individually 2084, and in some examples by one or a plurality of tools 2084 2096 2103 2104 2090 2091. In some examples after editing said selections 2084 a digital reality is confirmed by viewing and finished 2085 which includes saving them in the local device 2092, or in some examples saving them in an appropriate remote storage 2093 such as on the TP Network 2093. A specification of the digital reality's attributes and components is also saved 2092 2093 to provide (optional) default selections when another new digital reality is created 2083 for that device 2080 in the future. Alternatively, said digital reality's attributes and components 2092 2093 may provide its settings and attributes if that user or other users have similarly capable TP devices, so that this digital reality (such as its template, appearance, components, functions, settings, etc.) may be duplicated on a new TP device. In some examples when said digital reality is complete 2085 it can be tagged 2086 and published directly 2086 2108, or in some examples by means of data logging and a service that identifies the most valuable digital realities 2106 2107 2108, such as described in FIG. 50 and FIG. 87 and elsewhere.
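
The saving and reuse of a digital reality's specification, as described above, might take a form such as the following sketch; the spec fields and store names are hypothetical illustrations rather than a definitive format:

    import json

    spec = {
        "type": "art reality",
        "template": "city overlay",
        "components": ["sculpture garden", "musical theme"],
        "settings": {"streaming": "on audience presence"},
    }

    def save(spec, local_store, remote_store=None):
        serialized = json.dumps(spec)
        local_store["last_spec"] = serialized       # saved on the device
        if remote_store is not None:
            remote_store["last_spec"] = serialized  # saved on the TP Network

    def load_defaults(store):
        # used as default selections for the next new digital reality, or
        # to duplicate this one on a similarly capable TP device
        return json.loads(store["last_spec"])

    local, remote = {}, {}
    save(spec, local, remote)
    print(load_defaults(remote)["template"])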

In some examples when said constructed digital reality(ies) 2085 are used 2087 data is captured as described elsewhere and stored 2106 such as in some examples to a metered data database 2106 that may include in some examples logging of streams, in some examples audience size data, in some examples audience demographics data, in some examples audience profile data, in some examples users' individual identification data, etc. If one or a plurality of these audience data are captured 2087 and recorded 2106 (such as which digital reality was used, audience data, each successfully metered revenue producing event associated with said digital reality, and [optionally] which user employed each event) then said metered data 2106 may be accessed and applied by a TP Digital Realities Broadcast Selections and Revenue(s) Generation Service 2107. Since audiences are valuable and can be monetized 2107 2108, the metrics and data on individual digital realities 2087 2106 may be employed in a range of known methods, systems, or applications to produce various types of revenues and income from the streaming and/or transmission of said digital realities, from advertising, from subscriptions, from memberships, from event tickets, or from other revenue sources.
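
As a brief, non-limiting illustration of logging one metered, revenue-producing event to a metered data database 2106, the following sketch uses an in-memory SQLite table; the table and field names are assumptions, not a schema defined by this description.

```python
# Illustrative-only sketch of metered event logging (2087 2106).
import sqlite3, time

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE metered_events (ts REAL, reality_id TEXT, "
           "event_type TEXT, audience_size INTEGER, user_id TEXT)")

def log_metered_event(reality_id, event_type, audience_size, user_id=None):
    # Records which digital reality was used, audience data, and (optionally)
    # which user employed the event (2106).
    db.execute("INSERT INTO metered_events VALUES (?, ?, ?, ?, ?)",
               (time.time(), reality_id, event_type, audience_size, user_id))
    db.commit()

log_metered_event("grand-canal-sunset", "ad_impression", 1830)
```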

In some examples when said digital reality is complete 2085 if needed or desired it may be modified 2083, edited 2083, updated 2083, or ended 2083 by means of the process described previously for selecting 2084 and editing 2084 a digital reality or its elements 2084 such as its template 2090, components 2090, features 2090, etc. This may be done as a normal part of updating or ending a digital reality because various elements 2090 associated with said digital reality may be updated, replaced or terminated from time to time. In addition, a differently designed or configured digital reality may produce larger audiences 2087 2106 2107, higher revenues 2107, etc. so that it may be advantageous to modify 2088 some part(s) of a digital reality or its elements.

In some examples the use of one or a plurality of digital realities 2087 may lead to new ideas in some examples by RTP owners 2102, in some examples by vendors 2102, in some examples by users of one or a plurality of digital realities 2102, in some examples by a digital reality's audience 2102, or in some examples by others who know of one or a plurality of digital realities. Said new ideas may include in some examples new types of digital realities 2089, in some examples improved elements 2089 2090 of digital realities, in some examples improved digital reality features 2089 2090, in some examples improved digital reality publishing 2086 2108, in some examples for introducing a new type of digital reality(ies), in some examples improved promotion or marketing opportunities 2087 2106 2107 2108, in some examples improved monetization or revenue generation methods or applications 2087 2106 2107 2108, in some examples new combinations of existing and new ideas into a new capability(ies) that may be delivered repetitively 2090 2091, in some examples other types of new ideas. In some examples said new ideas 2089 may be developed 2102 2096 2103 2104 2090 2091 as described elsewhere.

In some examples a related process is the creation 2082 and development 2082 of new digital realities, elements, tools, features and capabilities by a variety of sources that may include in some examples TPU Services 2097 and TPU Applications 2098 (Teleportal Utility Services and/or Applications may develop and deliver new types of digital realities 2090, or new digital realities elements 2090 that may be incorporated into realities construction tools 2103 2104, or saved directly to one or a plurality of repositories 2090, for selection and use 2084 in the construction of digital realities); in some examples Third-Party TP Vendors 2099 and/or Third Party TP Services 2100 (whether large industry-leading corporations or new small business startups, vendors of products or services may develop and deliver new digital realities elements 2090 that may be incorporated into realities construction tools 2103 2104, or saved directly to one or a plurality of repositories 2090, for selection and use 2084 in the construction of digital realities); in some examples other sources of elements 2101 (which may be adapted from standards-based components such as portlets, servlets, widgets, small applications, etc. that may in some examples be accessed by realities construction tools, and in some examples may be added to a virtual repository 2090); in some examples digital realities users 2102, audience members 2102, RTP owners who provide one or a plurality of digital realities 2102, or others may provide new ideas 2089 (such as for new types of digital realities, new features, new services, new revenue opportunities, etc.). These digital realities development improvements 2096 may be delivered to other digital realities creators 2084 by means previously described (the process for selecting and editing realities, components and features 2084; by means of a selection/delivery service for realities, components, etc. 2091; by means of a virtual repository[ies] 2090; etc.).

In some examples another related process is the TP Digital Realities Broadcasts Selections and Revenue(s) Generation Service 2107 which includes means for identifying and presenting the most popular and most used digital realities 2087 2106, and (optionally, where metered and logged) components and features of said digital realities 2087 2106, and (optionally, where metered and logged) the absolute or relative magnitude of revenues generated by various types of digital realities 2087 2106 or their components and features 2087 2106. Said data 2106 2107 may be provided in various ways such as in some examples statistics 2107, in some examples graphical visual illustrations 2107, in some examples best practices 2107; and in some examples said data 2106 2107 may be provided directly to said development tools 2103, in some examples may be provided during the use 2084 of a Selection/Delivery Service for Realities, Components, etc. 2091, and in some examples may be associated with the choice or use of individual elements from a virtual repository(ies) 2090. In some examples in each tool 2103, selection service 2091, repository 2090, etc. the types of digital realities or elements may be sorted so the first ones displayed are those that produce the most success 2087 2106 2107, and the last displayed are those that produce the least success 2087 2106 2107. As a result, providers of digital realities 2080 may improve their selection of resources 2081, their further development of continually advancing digital realities 2082, and the publishing of their digital realities 2108, so that digital realities simultaneously provide the greatest benefits to both their providers and their users/audiences.
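
A minimal sketch of the success-based sorting described above follows; the weighting of audience size against revenue is an assumption for illustration, not a method defined by this description.

```python
# Sketch of sorting digital realities so the most successful display
# first (2087 2106 2107); weights are illustrative assumptions.
def rank_realities(metrics, w_audience=1.0, w_revenue=2.0):
    """metrics: list of dicts like {'id': ..., 'audience': int, 'revenue': float}."""
    return sorted(metrics,
                  key=lambda m: w_audience * m["audience"] + w_revenue * m["revenue"],
                  reverse=True)   # most successful first, least successful last

ranked = rank_realities([
    {"id": "reality-A", "audience": 5000, "revenue": 120.0},
    {"id": "reality-B", "audience": 800, "revenue": 950.0},
])
```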

In some examples combinations may be provided for remote access and use such as providing one or a plurality of RTPs as an externally controlled device(s) or service(s) so that others may construct digital realities 2083 2084 2091 2085 2086 2087 2106 2107 2108 2088 2089 2102 and deliver said digital realities 2087 for various audiences 2106 2107 with revenue sharing and income when audiences are monetized 2107 2108 by those additional digital realities creators. In such a case, users from a plurality of locations may create and stream one or a plurality of digital realities that have access to said RTP's plurality of sensors and sources (as described elsewhere). To accomplish this, and to provide this functionality as a capability of RTPs owned and provided by one or a plurality of corporate and/or individual owners, said owners may combine an RTP with TP sharing (as described elsewhere), or with RCTP (Remote Control Teleportaling), and also with digital realities creation tools 2082 2096 2103 2104, sources (as described elsewhere), and resources 2090—then publish this as a complete RTP remote digital realities broadcast resource 2090 2091 for shared creation and use. With these types of resulting devices and capabilities in one or a plurality of digital realities selection services 2091, remote users may access said RTPs to create multiple digital realities 2083 2084 2091 2085 to publish and attract audiences 2087 2106, so that those audiences may be monetized 2107 2108 and the resulting revenues shared.

When considering an overall view of Digital Realities Construction Resources, this is a substantial departure from typical product development which usually provides a static product design that remains fixed and is updated only periodically (such as every couple of years). In contrast, these methods and processes support self-determined improvement and advancement processes that provide data on what is most successful and least successful to guide the creation and delivery of the best and most attractive digital realities—continuously by one or a plurality of creators, without waiting for slow cycles of periodic updates.

In some examples there are incentives to provide more successful digital realities such as in some examples revenues and earnings, in some examples larger audiences, in some examples ticket sales, in some examples additional registrations, in some examples additional subscriptions, in some examples additional memberships, in some examples sufficient utilization to support continued provision of one or a plurality of digital realities that people want and choose, in some examples the opportunity to develop and advance new features for digital realities, in some examples the opportunity to add new capabilities within digital realities, in some examples the opportunity to explore new or interesting ways to live, in some examples the opportunity to experiment with new state(s) of reality or ways to express reality, in some examples the ability to consider and perhaps redefine the human condition from new perspectives, etc.

Turning now to FIG. 41, “TP Devices' Digital Realities, Events, Broadcasts, Etc. and Revenues,” one or a plurality of requests for a digital reality(ies) is received 2110 from one or a plurality of sources such as described elsewhere (such as in FIG. 87 which describes a current events, places and constructed digital realities media that includes searches, lists, applications, services, portals, dashboards, events, alerts, subscriptions, directories, profiles, and other sources). Said request(s) 2110 is received by a source that provides a requested digital reality, or provides access to a plurality of digital realities; and requestors in some examples may be an LTP(s) 2112, in some examples may be an MTP(s) 2112, in some examples may be an RTP(s) 2112, in some examples may be a TP subsidiary device(s) 2112, in some examples may be an AID(s)/AOD(s) 2112, in some examples may be a TP network device(s) 2113, and in some examples may be another type of networked electronic device(s).

Being permitted to join a focused connection 2121 in response to a request 2110 is described elsewhere in more detail (such as in attending a free, paid or restricted event in FIG. 87 and elsewhere), and said connection is defined herein as an “event,” which includes live or recorded streams such as events, places and constructed digital realities. In a brief summary in some examples said request(s) to enable a focused connection 2116 do not require payment 2117 nor have any restriction 2118 so that a focused connection 2121 is opened in response to said request; and (optionally) said requestor may join the SPLS for that connection such as for that event, place, digital reality, group, etc. In some examples said request(s) require acceptance to enable a focused connection 2116 because said “event” is not free 2117 or is restricted 2118 in which case it may require purchase of a ticket 2119, making a payment 2119, paying a fee 2119, registration 2119, subscription 2119, membership 2119, etc. If that is the case, then in some examples a user may submit a code 2122, credential 2122, ticket 2122, membership 2122, authorized identity 2122, subscription code or credential 2122, etc. and if not accepted 2123 or not authorized 2123, said user may be denied the requested connection 2123. In some examples, however, acceptance 2124 or authorization 2124 is granted and a focused connection 2121 is opened in response to said request; and (optionally) said requestor may join the SPLS for that connection such as for that event, place, digital reality, group, etc.
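
The admission decision just summarized may be sketched, in a non-limiting way, as follows; the credential check is a placeholder for the ticketing, registration, subscription, and membership means described in FIG. 87 and elsewhere.

```python
# Hedged sketch of admitting a request to a focused connection (2116-2124).
def admit(request, event):
    # Free, unrestricted events open a focused connection immediately (2117 2118 2121).
    if not event.get("requires_payment") and not event.get("restricted"):
        return True
    # Otherwise a code, ticket, membership, etc. must be accepted (2122-2124).
    return request.get("credential") in event.get("accepted_credentials", set())

event = {"requires_payment": True, "accepted_credentials": {"TICKET-123"}}
assert admit({"credential": "TICKET-123"}, event)   # accepted; connection opened (2124 2121)
assert not admit({"credential": "BAD"}, event)      # denied (2123)
```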

Delivering a stream 2126 2130 in a connection such as 2121 2116 is described elsewhere in more detail. In a brief summary the recipient's identity 2127 is determined along with the recipient's current DIU (Device In Use) 2127, and (optionally) in some examples a new stream may be customized 2128 for said recipient 2127 or device 2127 such as by (optionally) blending in one or a plurality of advertisements 2129, links to related content 2129, marketing messages 2129, sponsor's content 2129, etc. as described elsewhere. If a stream is customized 2128 2129 sources for said customization 2138 such as sponsor ads, sponsor messages, sponsor links, sponsor marketing, etc. may be retrieved from sponsor services 2144 2145 2149. Whether a standard stream 2121 2126 2130 or a customized stream 2121 2126 2127 2128 2129 2130 is provided, said stream 2130 is logged 2131 along with (optionally) logging data such as audience size 2131, demographics 2131, special features or interactive capabilities used 2131, identities 2131, other relevant usage data 2131, etc. In some examples said logged and stored raw data 2131 2132 2133 may include revenue-related data 2132 such as users' receipt of ads or marketing messages 2132, users' actions that result from advertising or marketing 2132 (ranging from immediate purchases to linking to bookmarking to additions to wish lists to other relevant behaviors), audience member types (if some types of audiences have higher value than others), audience member locations (if audiences in some countries, cities or neighborhoods have higher value than others), date and time used (if some days and times have higher value than others), identity (if some specific individuals have higher value than others), etc. In some examples said logged and stored raw data 2131 may include audience data 2133 such as audience size 2133, audience demographics 2133, various audience behaviors or interactions that are non-revenue producing (e.g., don't involve advertising, marketing, sales, etc.), and other types of audience data that may be tracked for a variety of purposes.
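
A simplified sketch of per-recipient customization and logging follows; the "blending" here is a trivial stand-in for the blending described elsewhere, and all names are illustrative assumptions.

```python
# Simplified sketch of stream customization (2127-2129) and logging (2131).
def deliver_stream(stream, recipient, fetch_sponsor_content, log):
    out = dict(stream)
    if recipient.get("customize"):                          # customization decision (2128)
        ad = fetch_sponsor_content(recipient["identity"],   # sponsor sources (2138 2144 2145 2149)
                                   recipient["device"])
        out["blended_content"] = [ad]                       # blended-in content (2129)
    log.append({                                            # logging (2131)
        "identity": recipient["identity"],
        "device": recipient["device"],
        "customized": bool(recipient.get("customize")),
    })
    return out
```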

In some examples a connection 2130 includes validating reception 2134 of said stream 2130 to confirm that certain logged data 2131 is as valid as possible. In some examples validation 2134 is by receiving a response from the receiving device 2135 and the appropriate data is logged 2131; in some examples validation 2134 is by receiving a response from the recipient user 2135 and the appropriate data is logged 2131; in some examples validation 2134 is provided by other means such as by attention tracking, eye tracking, interactions with said stream, etc. (as described in FIG. 119 and elsewhere) and the appropriate data is logged 2131. In some examples if said validation 2134 is unsuccessful 2135, said stream may be managed by an error correction/improvement service 2136 (as described elsewhere; and additionally, may serve as a new trigger for an AKM [Active Knowledge Machine] request as described elsewhere).

In some examples streams 2121 are customized 2128 for one or a plurality of recipients 2127 by blending in sponsor messages, marketing, advertising, video (including audio), images, or other commercial information 2129 that are received from one or a plurality of sponsor services 2138 2145 2149 2144. Said customization 2128 includes determining the one or a plurality of receiving devices 2127 and/or the identity(ies) of one or a plurality of recipients 2127, selecting the appropriate commercial messages for said device(s) and/or recipient(s), blending said stream(s) 2129 as described elsewhere, transmitting said blended stream 2130, and logging the appropriate resulting data 2131 2132 2133 (including in some examples validation of delivery or reception 2134 2135 2131).

Revenues may be generated by various systems, processes, methods and other means, one of which may include sponsor services 2145. In some examples said sponsor services 2145 include sponsor selection 2146 such as by sale 2146, auction(s) 2146, etc.; the entry of deliverable messages by the sponsors selected 2147 which may include messages 2147, marketing 2147, advertising 2147, video (including audio) 2147, images 2147, sponsor's content 2147, or other commercial information 2147; and the storage of said messages for retrieval 2148, which may (optionally) include categorized areas such as by types of products or services 2147 2148 (such as, for example, automobiles or trucks in transportation 2147 2148, fast food or beverages in food 2147 2148, smart phones or mobile phone services in communications 2147 2148, etc.); in some examples the retrieval of sponsor's video 2149 messages 2149, advertisements 2149, marketing messages 2149, commercial links 2149, etc. such as by categories 2147 as described elsewhere, or (optionally) by individually named competing products 2149 (such as, for example, Toyota in automobiles 2149, Nikon in cameras, McDonald's in fast food, AT&T in mobile phone services, etc.); in some examples said sponsor messages retrieved 2149 for blending 2129 and streamed delivery 2130 may be recorded in one or a plurality of systems such as an accounting system 2158, logging system, or other billing and payment system 2158 as described elsewhere.
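
The storage-and-retrieval step 2148 2149 may be sketched, for illustration only, as a categorized store; the category taxonomy and field names are assumptions.

```python
# Hypothetical sketch of sponsor message storage (2147 2148) and retrieval
# (2149) by category and, optionally, by named product.
from collections import defaultdict

class SponsorStore:
    def __init__(self):
        self._by_category = defaultdict(list)

    def add(self, category, message):
        self._by_category[category].append(message)       # storage (2148)

    def retrieve(self, category, product=None):
        messages = self._by_category.get(category, [])    # by category (2149)
        if product is not None:                           # by named product (2149)
            messages = [m for m in messages if m.get("product") == product]
        return messages

store = SponsorStore()
store.add("transportation", {"product": "Toyota", "ad": "video-001"})
assert store.retrieve("transportation", product="Toyota")
```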

In some examples said logged revenues data 2131 2132, audience data 2131 2133, and other types of logging that counts and records data about streams, connections, events, digital realities, receptions, audiences, users, identities, broadcasts, etc. may be accessed 2139 2154 2155 such as by sorting 2155, filtering 2155, ranking 2155, extracting 2155, etc. and stored 2156 for a plurality of uses 2160 2161 2162. In some examples said uses include standard or customized dashboards 2160, or standard or customized reports 2160, which utilize said logged data 2131 2132 2133 2139 2154 2155 2156 for one or a plurality of users such as sources 2111 2116 2160, recipients 2110 2121 2126 2160, sponsors 2145 2160 (such as advertisers, marketers, vendors, etc.), device vendors 2160, various types of customers 2160, etc.; and may (optionally) provide data for one or a plurality of services such as a PlanetCentral(s) 2160, a GoPort(s) 2160, an alert(s) 2160, an event(s) 2160, a digital reality(ies) 2160, a report(s) 2160, a dashboard(s) 2160, accounting systems 2158 that utilize ranked data 2156 and raw data 2132 2133, business systems that employ said data 2156, and other external applications that employ said data 2156. In addition, Web and other requests 2161 may provide answers to custom information questions to said users (as described in 2160) and said services (as described in 2160).

In some examples said logged and stored data 2132 2133 2156 is used to provide ranked revenue opportunities 2162 for improved decision-making when constructing digital realities 2162, broadcasts 2162, services 2162, various types of devices 2162, new features when the existing devices are updated and re-launched 2162, and many other types of decisions relating to a growing digital reality (as described elsewhere). In some examples said ranked data 2156 is utilized by a TP digital realities broadcasts, events and revenue(s) generation process, method, system, etc. 2107 as described in FIG. 41 and FIG. 42 and elsewhere. In some examples said ranked data 2156 is utilized to determine revenue producing opportunities for devices such as Teleportals, in some examples said ranked data 2156 is utilized to determine audience generation opportunities, and in some examples said ranked data 2156 is utilized to determine other growth opportunities. As a result, one or a plurality of said digital realities, said broadcasts, said events (or types of events), said services, said devices, etc. may evolve as an ecosystem environment where evidence of visible results produces indicators that lead to greater growth and faster advances in the directions that produce the highest levels of interest 2162, adoption 2162, use 2162, revenues 2162, audiences 2162, and other logged metrics that indicate success 2162.

In some examples accounting systems 2158 (described in more detail elsewhere, but summarized briefly here, along with some examples of specific features called out) collect revenues 2158 by accessing logged data 2156 2132 2133 that may be used for accounting and billing to invoice sponsors 2150 and receive their payments 2152. In some examples sponsors are invoiced for advertisements 2150; in some examples sponsors are invoiced for marketing messages 2150; in some examples sponsors are invoiced for product placements that are digitally blended into streams 2150; in some examples sponsors are invoiced for brand placements that are digitally blended into streams 2150; in some examples sponsors are invoiced for marketing information delivered within streams 2150; in some examples sponsors are invoiced for links displayed (such as to make an online purchase, see an item in an online store, add an item to a wish list, or any other e-commerce action) 2150; in some examples sponsors are invoiced for any e-commerce link(s) used 2150; etc. In some examples said accounting system(s) provides said accounting data to third parties' billing systems 2158 to invoice sponsors 2150 and receive payment 2152; in some examples said accounting data is utilized for direct invoicing of sponsors 2158 2150 and receiving payment 2152; in some examples one or a plurality of said sponsors 2146 2147 maintain a financial account that includes deposited monies, and said invoices 2158 2150 automatically bill said sponsor's depository account and receive payment 2152 in one electronic step 2150 2152; in some examples one or a plurality of said sponsors 2146 2147 maintain an electronic payment instrument in their financial account (such as in some examples a credit card, in some examples automated payments by a bank account, in some examples automated payments by a third-party payment service, etc.) and said invoices 2158 2150 automatically invoice said sponsor's financial account and receive payment 2152 in one electronic step 2150 2152 by means of said electronic payment instrument; in some examples one or a plurality of said sponsors 2146 2147 receives said invoice(s) 2150 and makes a separate payment(s) 2152.
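
The one-step billing of a sponsor's depository account may be sketched as follows; the amounts, rates, and field names are illustrative assumptions only.

```python
# Minimal sketch of one-step electronic billing against a sponsor's
# depository account (2150 2152).
def invoice_and_collect(sponsor, line_items):
    amount = sum(item["count"] * item["rate"] for item in line_items)   # invoice (2150)
    if sponsor.get("deposit_balance", 0.0) >= amount:
        sponsor["deposit_balance"] -= amount    # bill the depository account and
        return amount                           # receive payment in one step (2152)
    raise RuntimeError("insufficient deposit; issue a separate invoice (2150)")

sponsor = {"name": "ExampleSponsor", "deposit_balance": 10000.0}
paid = invoice_and_collect(sponsor, [{"count": 5000, "rate": 0.01}])    # 5,000 ad impressions
```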

In some examples accounting systems 2158 pay sources 2164 2165 2111 2112 2113, owners of TP devices who provide sources 2164 2165 2111 2112 2113, etc. (herein collectively referred to as “sources”) when monies are invoiced 2150 and received 2152 from sponsors 2145. In some examples one or a plurality of sources are paid for any means by which they monetize their audience(s) 2110 2116 and deliver streams to them 2121 2126. In some examples one or a plurality of sources are paid for delivering advertisements 2129 2150; in some examples sources are paid for marketing messages 2129 2150; in some examples sources are paid for product placements that are digitally blended into streams 2129 2150; in some examples sources are paid for brand placements that are digitally blended into streams 2129 2150; in some examples sources are paid for marketing information delivered within streams 2129 2150; in some examples sources are paid for links displayed (such as to make an online purchase, see an item in an online store, add an item to a wish list, or any other e-commerce action) 2129 2150; in some examples sources are paid for any e-commerce link(s) used 2129 2150; etc. In some examples one or a plurality of sources are paid due to a recipient's buying a ticket 2119 2120 to access said source; in some examples sources are paid due to a recipient's making a payment 2119 2120 to access said source; in some examples sources are paid due to a recipient's paying a fee 2119 2120 to access said source; in some examples sources are paid due to a recipient's registering 2119 2120 to access said source; in some examples sources are paid due to a recipient's subscribing 2119 2120 to access said source; in some examples sources are paid due to a recipient's joining or becoming a member 2119 2120 to access said source; etc. In some examples said payments to one or a plurality of sources 2165 are made from the direct invoicing of sponsors 2158 2150 and receiving their payment(s) 2152; in some examples said payments to one or a plurality of sources 2165 are received from third parties' billing and payment systems 2158 wherein said third parties invoice one or a plurality of sponsors 2150, receive one or a plurality of sponsors' payment(s) 2152, and pay said sources 2165.
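
An illustrative revenue-share computation, paying a source out of a collected sponsor payment, is sketched below; the 70/30 split is purely an assumption, as no share fraction is specified here.

```python
# Illustrative split of a collected sponsor payment (2152) between the
# source (2164 2165) and the billing party (2158).
def pay_source(collected, source_share=0.7):
    to_source = round(collected * source_share, 2)   # payment to the source (2165)
    retained = round(collected - to_source, 2)       # retained by the biller (2158)
    return to_source, retained

assert pay_source(50.0) == (35.0, 15.0)
```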

In some examples sources 2166 (which include TP device owners, companies, broadcasters, and other types of sources) utilize data to determine their best opportunities to increase revenues 2166 2167, audiences 2166 2167 or other success indicators and metrics 2166 2167. In some examples sources utilize logged data 2131 2132 2133 2155 2156; in some examples sources utilize accounting data 2158; in some examples sources utilize ranked growth opportunities 2162; in some examples sources utilize ranked revenue opportunities 2162; in some examples sources utilize ranked audience increase opportunities 2162. In some examples sources utilize one or a plurality of types of market information sources such as in some examples recipients' groups and associations, in some examples market research services, in some examples prepackaged market studies, in some examples device vendor associations, in some examples industry groups, etc. In some examples sources may (optionally) receive aggregate data or subsets of data from one or a plurality of services such as a PlanetCentral(s) 2160, in some examples a GoPort(s) 2160, in some examples an alert(s) service(s) 2160, in some examples a digital event(s) service(s) 2160, in some examples a digital reality(ies) search engine 2160, in some examples an online analytics and reporting service 2160, in some examples an online dashboard(s) service(s) 2160, in some examples a behavior tracking and ad serving service 2160, in some examples an accounting system(s) 2160. In some examples sources may (optionally) receive data from one or a plurality of third-party business systems, or in some examples another external application(s) that logs and/or utilizes said types of data.

In some examples said data is used to determine which types of digital realities to create 2167; in some examples said data is used to determine new trends of emerging types of digital realities 2167; in some examples said data is used to determine digital realities with higher revenues and earnings 2167; in some examples said data is used to determine how to increase audience size 2167; in some examples said data is used to determine how to increase ticket sales 2167; in some examples said data is used to determine how to increase registrations 2167; in some examples said data is used to determine how to increase subscriptions 2167; in some examples said data is used to determine how to increase memberships 2167; in some examples said data is used to determine which of a set of provided digital realities are most preferred and used by their audiences 2167; in some examples said data is used to determine how to develop and obtain feedback on new features for digital realities 2167; in some examples said data is used to determine how to develop and obtain feedback on new capabilities within digital realities 2167; in some examples said data is used to determine which opportunities should be explored to find new or more interesting ways to live digitally 2167; in some examples said data is used to determine new ways to experiment with various interactive options for digital reality 2167; in some examples said data is used to determine the ability to consider the human condition from new perspectives 2167; etc.

Integration with ARM Boundaries Settings (Choose Your “Reality[ies]”): The Alternate Realities Machine (herein ARM) is described elsewhere in detail, but in some examples it provides ARM Boundary Management that provides recipients with greater control over their digital and physical space within the larger shared physical reality—in some examples an ARM provides means to reverse parts of the control over the common shared reality from top-down to bottom-up. As illustrated in some examples (such as in FIG. 115) an ARM includes filters/priorities so that recipients can determine what each wants to include and exclude; in some examples it includes digital and physical self-chosen personal protections for individuals, households, groups, and the public; in some examples it includes Paywalls so that individuals may earn money from providing their attention, rather than giving it away for free to those who sell it to advertisers. The result is personally controlled Shared Planetary Life Spaces (herein SPLS's) that have some parallels to how DVRs (Digital Video Recorders) are used to control hundreds of television channels—we record the television shows we want to see, play and watch what we prefer, and skip what we don't want.

Therefore, in various examples one or a plurality of SPLS boundaries are made explicit and manageable by said ARM. Within a particular set of Boundary Settings one's digital reality may be considerably different than someone else's. In addition, the ARM includes means to save, distribute and try out new Boundaries Settings so the most desirable alternate realities may rapidly spread and be tried, personally altered and adopted wherever they are preferred. As a result, the best alternate realities may be tried and applied with the scope and speed that the best realities deserve—possibly providing multiple better competitors than the common shared reality. In some examples the “best” Boundary Settings may be designed, marketed, sold and/or supported by individuals, corporations, governances, interest groups, organizations, etc. to improve the lives and experiences of those who live in their Shared Planetary Life Spaces.

Finally, in some examples a person has multiple identities (as described elsewhere in more detail) and each identity may have its own one or a plurality of SPLS's (as described elsewhere in more detail), and each SPLS may have one or a plurality of ARM Boundary Settings. In other words, in some examples by switching to a different established identity (as described elsewhere), a person immediately changes their SPLS(s) and ARM boundaries to the new “reality” and is thereby able to experience and enjoy life differently. If a person has a plurality of identities, they may switch among them so that the SPLS's and ARM boundaries of each different identity take effect. As a result, one person may change how reality is presented to them (and therefore perceived by them) as often as they want. The implication is that for one or a plurality of persons, reality can be put under their personal control—rather than the other way around.

Turning now to FIG. 43, “Integration with ARM Boundaries Settings (Choose Your ‘Reality[ies]’),” some examples of the above ARM processes are illustrated, which begin in some examples with RTP digital realities 2171 as described elsewhere; in some examples with digital sources 2171 as described elsewhere; in some examples with a broadcasted stream 2171 as described elsewhere; in some examples with governances 2171 as described elsewhere; etc. In some examples this also begins with a person's ARM boundaries settings 2172; and in some examples this begins with an identity's ARM boundaries settings 2172 (in which case an individual has one or a plurality of identities); and said person or identity has one or a plurality of ARM boundary settings.

In some examples after experiencing a source such as a digital reality 2171, a broadcasted stream 2171, a component of a governance 2171, or another type of source 2171, said identity 2172 may optionally choose to modify an ARM boundary for that source 2175. In some examples ARM boundaries (as described elsewhere in more detail) include priorities/exclusions 2175, a Paywall 2175, protection 2175, etc. In a brief summary a subset of said ARM boundaries are illustrated, namely the optional ARM boundary setting for prioritizing 2176 or excluding 2176 the source 2171 that was experienced. In a similar manner, the experience of any source 2171 may be utilized to modify any appropriate ARM boundary setting 2175 for a person 2172 or for one of said person's identities 2172.

In some examples the modification of said ARM boundary 2176 begins by deciding whether or not to apply a known ARM boundary 2177 that is based on said source 2171; in some examples a source 2171 is tried because it is new and popular so there may be an associated ARM boundary setting to rapidly include and prioritize said popular new source 2171; in some examples a source 2171 is tried because it may seem interesting but some of those who tried it may have disliked it so there may be one or a plurality of associated ARM boundary settings to exclude said source 2171, or to provide partial blocking of that source 2171. In some examples a source 2171 may belong to a category such as rock music stars, urban crimes in progress, new technology product launches, or any other category that a person may want to raise or diminish in importance. In some examples where there is an existing priority boundary and/or exclusion boundary for a category 2178 (rather than a specific source) it can be selected 2178 and adapted 2178 by increasing or decreasing that category's priority as described elsewhere. Said existing priority boundary(ies) 2178 and/or exclusion boundary(ies) 2178 is retrieved from one or a plurality of existing priority/filters databases 2179, displayed for selection 2178, and either used 2177 or not used 2177; then, if selected and used it may be adapted to fit the user's preferences 2178.

In some examples an existing boundary 2177 is not used and an ARM boundary setting may be created and set 2180 2182 2184 2186. In some examples said source 2171 may be added to priorities 2180 by adding it at a top priority 2181 or setting its priority level 2181 2188; in some examples said source 2171 may be added as an exclusion 2182 by adding it as completely blocked 2183 or setting its priority level 2183 2188. In some examples said source 2171 is already part of an ARM boundary so that it may have been part of that identity's experience because that ARM boundary did not block it, made it a slight priority, or included it as a top priority; so in some examples a user would want to modify the ARM boundary that affects said source 2171—if the experience was superior then the priority level of said source 2171 would be increased 2185 2188; and if the experience was poor then the priority level of said source 2171 would be decreased 2185 2188; and if the experience was negative for any reason then the ARM boundary would be set for varying levels of exclusion 2186 2187, right up to a complete block 2188. In some examples varying scales 2188 2189 may be used to set ARM boundaries such as priority boundary(ies) 2180 2184 and/or exclusion boundary(ies) 2182 2186, such as the seven-point scale used herein (though numerous types of scales are known, and may be employed appropriately). In some examples a seven-point scale for priorities 2180 2184 through exclusions 2182 2186 includes almost half that scale employed for priorities such as “top priority” 2189, “strongly preferred” 2189 and “somewhat preferred” 2189. In some examples a clear non-preferential midpoint may be included such as “neutral” 2189 which neither prioritizes nor excludes said source 2171. In some examples said seven-point scale 2188 2189 includes almost half that scale employed to filter exclusions such as “somewhat blocked” 2189, “usually blocked” 2189 and “completely blocked” 2189.
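
The seven-point scale 2188 2189 may be represented, as one non-limiting illustration, by a signed enumeration; the enum representation and source identifiers below are assumptions.

```python
# Sketch of the seven-point priority/exclusion scale (2188 2189) as an
# ARM boundary level.
from enum import IntEnum

class BoundaryLevel(IntEnum):
    TOP_PRIORITY = 3          # "top priority" (2189)
    STRONGLY_PREFERRED = 2    # "strongly preferred" (2189)
    SOMEWHAT_PREFERRED = 1    # "somewhat preferred" (2189)
    NEUTRAL = 0               # neither prioritizes nor excludes (2189)
    SOMEWHAT_BLOCKED = -1     # "somewhat blocked" (2189)
    USUALLY_BLOCKED = -2      # "usually blocked" (2189)
    COMPLETELY_BLOCKED = -3   # "completely blocked" (2189)

def set_boundary(boundaries, source_id, level):
    boundaries[source_id] = level   # later saved to a profile (2190) or database (2179)

boundaries = {}
set_boundary(boundaries, "rock-music-stars", BoundaryLevel.STRONGLY_PREFERRED)
set_boundary(boundaries, "urban-crimes-in-progress", BoundaryLevel.COMPLETELY_BLOCKED)
```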

In some examples after adding a priority boundary 2180 2181, adding an exclusionary filter 2182 2183, or modifying an existing priority/exclusion 2184 2185 2186 2187 by selecting the preferred level for a source 2171 from the boundary's scale 2188 2189, that boundary may be saved to a priority/filters database 2179. That boundary and said user's preference then becomes available for rapid display and selection 2178, where it may either be used 2177 or not used 2177; then, if selected and used by another person or identity it may be adapted to fit another user's preferred level of prioritization/exclusion 2178 for that source 2171.

In some examples after adding a priority boundary 2180 2181, adding an exclusionary filter 2182 2183, or modifying an existing priority/exclusion 2184 2185 2186 2187 by selecting the preferred boundary level for a source 2171 from the boundary's scale 2188 2189, that boundary may be saved to a user's profile 2190 where it may be retrieved and used by an identity 2172. In some examples said ARM boundary for priorities/exclusions 2176 is not altered, in which case another ARM boundary may (optionally) be modified 2194. In some examples after completing the modification of said ARM boundary 2176 and saving said updated ARM boundary 2190, a person 2172 or identity 2172 may (optionally) choose to modify another ARM boundary based on the experience of that source 2194. In some examples other ARM boundaries that may be set (as described elsewhere in more detail) include a Paywall 2194, protection 2194, etc.

In some examples after desired ARM boundary modifications are complete 2175 2176 2194 said ARM boundaries settings process(es) ends 2195, and said updated ARM boundaries are applied 2195.

Typical current displays on televisions, computers, digital picture frames, electronic pads, tablets, cell phones, etc. are “unreal” in that their displayed images are fixed and do not have the changing field of view that is easily seen by looking through any window and moving from side to side or stepping forward and back, nor do they have parallax shifts when the screen's user changes position and obtains a new perspective (e.g., a new line of sight).

In some examples a subsystem that may be optionally added to varied devices is a Superior Viewer Sensor (herein SVS) which automatically and/or manually updates and controls a visual display(s) based on the position of one or a plurality of viewers relative to said display, in order to simulate the changing real view that is seen through a real window. In some examples this provides TPDP (Teleportal Digital Presence) with an automated simulation of views through a real window so that as one or a plurality of viewers move relative to the device's screen the image displayed is adjusted to match the position(s) of the viewer(s). Because an SVS is digital it may also provide other digital features and functions.

As a result of an SVS subsystem, a viewer becomes a “superior viewer” because the viewer's “normal” digital presence may be seen, heard, experienced, manipulated, used and understood in more detail and in more ways than the physically present local world is generally experienced—making digital presence in some examples a richer, wider, more varied, simultaneously multiplied (with more views and/or locations at once), interesting and controlled experience than one's local physical presence. Therefore, in some examples an SVS subsystem produces a simulation of the view through a window by means of a display screen, as well as digitally enhanced views and sounds of what is displayed by means of digital video processing and/or digital audio processing. In some examples an SVS subsystem is comprised of a device such as devices illustrated in FIG. 44 and described elsewhere; real-time video processed by said device and/or stored video or images; a display screen that displays said video and/or images; a sensor that detects and locates one or a plurality of observers with respect to said display screen; a display control system, method or process that automatically adjusts the image displayed based upon the location of one or a plurality of observers with respect to the display screen; and optional digital visual enhancements and digital audio enhancements where said display control system, method or process adjusts the image(s) and/or sounds based upon a command(s) provided by one or a plurality of observers.

In some examples an SVS subsystem may be provided entirely within a single local device; in some examples parts of an SVS subsystem may be distributed such that various functions are located in local and remote devices, storage, and media so that various tasks and/or program storage, data storage, processing, memory, etc. are performed by separate devices and linked through a communication network(s). In some examples one or a plurality of an SVS subsystem's functions may be provided by means other than a device subsystem; in some examples one or a plurality of an SVS subsystem's functions may be provided by a network service; in some examples one or a plurality of an SVS subsystem's functions may be provided by a utility; in some examples one or a plurality of an SVS subsystem's functions may be provided by a network application; in some examples one or a plurality of an SVS subsystem's functions may be provided by a third-party vendor; and in some examples one or a plurality of an SVS subsystem's functions may be provided by other means. In some examples the equivalent of an SVS subsystem may be provided by means other than a device subsystem; in some examples the equivalent of an SVS subsystem may be a network service; in some examples the equivalent of an SVS subsystem may be provided by a utility; in some examples the equivalent of an SVS subsystem may be a remote application; in some examples the equivalent of an SVS subsystem may be provided by a third-party vendor; and in some examples the equivalent of an SVS subsystem may be provided by other means.

Together, FIG. 44 through FIG. 48 illustrate some examples of an SVS subsystem(s). FIG. 44, “SVS (Superior Viewer Sensor) Devices”: In some examples a device's display is controlled by means that include face recognition to determine one or a plurality of viewers' position(s) relative to the screen and adjusting the view display based on the viewer's position to reflect a naturally changing field of view. In some examples additional processing may be performed under the command of one or a plurality of users such as zooming in or out; freezing an image; displaying a fixed viewpoint; utilizing face recognition or object recognition; retrieving data about a viewed or recognized identity or object; boosting faint audio for clarity; cleaning up noisy audio; adding various types of effects, edits, substitutions, etc. to any of the IPTR displayed; or providing any other digital processing or manipulation. In some examples these additional types of digital commands and processing may be saved as a default, setting, configuration, etc. so that the device may subsequently provide continuous digital reality(ies) that include a viewer's preferred digital alterations or enhancements.

FIG. 45, “LTP Views with an SVS (example)”: In some examples an SVS provides a changing field of view for a viewer as illustrated by a view from an RTP on the Grand Canal in Venice, Italy, during sunset. When the same viewer stands on the right side of an LTP, the center of an LTP, and then the left side of an LTP, the view displayed is changed appropriately. In some examples a viewer may employ SVS commands (such as by a handheld remote control) in order to zoom in to see details along the Grand Canal. In some examples a viewer may converse with a local person by means of an RTP (such as a gondolier in Venice, with language translation provided by a different subsystem). In some examples automatic audio enhancement determines if each participant's voice is below sufficient audio quality and may isolate and boost that person's speech to sufficient clarity and volume; and in some examples said audio speech enhancement may be invoked manually.

FIG. 46, “SVS Process”: In some examples an SVS includes one or a plurality of viewer sensors, a viewer detecting section, and an optional viewer processing section. In some examples an SVS may adjust luminance; in some examples an SVS provides viewer detection to detect the presence and/or optional orientation of one or a plurality of viewers. In some examples optional viewer recognition is performed for various purposes such as prioritizing how the field of view is changed to reflect the viewing position(s) of one or a plurality of identified and prioritized viewers. In some examples an SVS automatically detects when device use begins (as described herein and elsewhere) and automatically initiates device operation(s) such as in some examples to provide continuous digital reality. In some examples an SVS command is entered and performed on one or a plurality of views, and in some examples an SVS command(s) is saved for automatic application in the future. In some examples an SVS automatically determines when non-use occurs (as described herein and elsewhere) and automatically puts the device into a powered down or waiting state until use begins.

FIG. 47, “SVS Changing Field of View due to Viewer Horizontal Location(s),” and FIG. 48, “SVS Changing Field of View due to Viewer Distance from Screen”: In some examples one or a plurality of SVS(s) calculates the image(s) displayed by determining the horizontal and distance location(s) of one or a plurality of viewers in relation to the center of a display screen (or in some larger displays in relation to the center of a plurality of screens). In some examples the received image is larger than the viewing area of the display screen so that as a viewer moves a responsively adjusted region of the received image may be displayed in the appropriate region (such as a “window”) of a device's screen.
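 
The region selection just summarized may be illustrated by a geometry-only sketch; the linear pan mapping below is an assumption, as FIG. 47 and FIG. 48 do not prescribe a specific formula.

```python
# Geometry-only sketch of choosing which region of an oversized received
# image to display (FIG. 47, FIG. 48).
def crop_region(img_w, img_h, view_w, view_h, viewer_x_norm):
    """viewer_x_norm in [-1, 1]: viewer's offset right (+) or left (-) of center.

    Pans the displayed window opposite the viewer's offset, as through a
    real window: moving right reveals more of the scene on the left.
    """
    slack = img_w - view_w                        # horizontal room to pan
    left = int(round((slack - viewer_x_norm * slack) / 2))
    top = (img_h - view_h) // 2
    return left, top, view_w, view_h

# A centered viewer sees the central region; a viewer at the far right sees
# the leftmost region of the received image.
assert crop_region(1920, 1080, 1280, 720, 0.0)[0] == 320
assert crop_region(1920, 1080, 1280, 720, 1.0)[0] == 0
```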

Superior viewer sensor devices: Turning now to FIG. 44 “SVS (Superior Viewer Sensor) Devices,” in some examples an LTP (Local Teleportal) 1402 may include an SVS subsystem; in some examples an MTP (Mobile Teleportal) 1402 may include an SVS subsystem; in some examples an RTP (Remote Teleportal) 1403 may include an SVS subsystem; in some examples an AID/AOD (Alternate Input Device/Alternate Output Device) 1404 as described elsewhere may include an SVS subsystem; in some examples a Subsidiary Device 1405 as described elsewhere may include an SVS subsystem; and in some examples other types of devices may include an SVS subsystem. In some examples said devices 1402 1403 1404 1405 are connected by one or a plurality of disparate networks 1401; in some examples parts of an SVS subsystem may be distributed such that various functions are located in local and remote devices, storage, and media so that various tasks and/or program storage, data storage, processing, memory, etc. are performed by separate devices and linked through said network(s) 1401; in some examples the equivalent of an SVS subsystem may be provided by means other than a device's local subsystem and provided over said network(s) 1401.

In some examples said SVS subsystem has a process 1406 that in some examples starts when said device is on 1407 and when said device has an SVS 1407 that is active; in some examples face detection is performed 1408 by said SVS; in some examples if one or a plurality of detected faces is turned toward the display screen then an active face(s) has been detected 1409; in some examples SVS processing determines the location of one or a plurality of viewers with respect to the display screen 1411 and the appropriate displayed video(s) and/or image(s) are adjusted 1411 based on the distance or angle of the viewer(s) to simulate the view through a window 1411; in some examples no active face(s) is detected 1409 and in some examples the SVS subsystem then goes into its default waiting state 1410, in some examples the SVS subsystem's default is to detect movement on the part of a viewer(s) 1410, and in some examples the SVS subsystem may include a motion detector 1410, and in any of these cases the SVS subsystem performs face detection again 1408; in some examples one or a plurality of viewers may enter an SVS command 1412 in which case the SVS processing performs said SVS command(s) 1413 and performs the appropriate video or audio adjustment 1413 for said command, and/or performs a different and appropriate action 1413 for said command.
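
A high-level sketch of this process 1406 follows; the Face type, the camera object, and the display object are hypothetical stand-ins for the face detection and display-adjustment means described above, not an actual subsystem implementation.

```python
# High-level sketch of the SVS control loop (1406-1413).
import time
from dataclasses import dataclass

@dataclass
class Face:
    facing_screen: bool   # an active face is turned toward the display (1409)
    x_norm: float         # horizontal offset from screen center, -1..1
    distance: float       # distance from the screen

def adjust_view(display, face):
    # Shift and scale the displayed region to simulate the view through
    # a window for this viewer's position (1411).
    display.pan = -face.x_norm
    display.zoom = 1.0 / max(face.distance, 0.5)

def svs_loop(camera, display, poll_seconds=0.1):
    while display.powered_on:                      # device and SVS on and active (1407)
        faces = camera.detect_faces()              # face detection (1408)
        active = [f for f in faces if f.facing_screen]
        if not active:
            time.sleep(poll_seconds)               # default waiting state (1410)
            continue
        adjust_view(display, active[0])            # adjust the displayed view (1411)
        for command in display.pending_commands(): # viewer-entered SVS commands (1412)
            command()                              # zoom, freeze, enhance, etc. (1413)
```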

Because an SVS is digital said commands 1412 may provide enhanced digital features and functions such as in some examples zooming in to see details 1412; in some examples zooming out to see the big picture(s) 1412; in some examples freezing an image to analyze it 1412; in some examples displaying a fixed viewpoint like an ordinary computer screen view without dynamic SVS adjustment based on the viewer(s) position 1412 (as described elsewhere); in some examples utilizing recognition to identify an individual or an object and/or retrieve data about said individual or object 1412; in some examples enhancing audio for clarity 1412 (such as in some examples raising the volume of voices so fainter voices may be understood, in some examples increasing clarity by filtering noisy backgrounds, and in some examples providing other audio enhancements); in some examples recording and storing video, audio, still images, etc. for retrieval and use in the future 1412; in some examples changing the view or viewpoint (if a plurality of views are available) 1412; in some examples adding various types of effects, edits, substitutions, etc. to any of the IPTR displayed 1412; in some examples substituting an edited display as the source output with or without informing other participants of said edited alterations 1412; or performing any other digital manipulation 1412. Said digital functions may be performed by means of commands that may include gestures 1412 in some examples, voice 1412 in some examples, a remote control(s) 1412 in some examples, a touch screen 1412 in some examples, on-screen controls 1412 in some examples, a pointing device(s) 1412 in some examples, a 3-D controller 1412 in some examples, a menu 1412 in some examples, etc.; and in some examples providing other types of controls 1412, controllers 1412, features 1412 and functions 1412.

In some examples SVS commands 1412 may be saved as defaults 1414, settings 1414, configurations 1414, or another storage means 1414 so that they may be performed automatically 1411 thereafter, without requiring the direct control of one or a plurality of users 1412. In some examples an SVS may therefore automatically produce a continuous digital reality(ies) 1411 that include the preferred digital alterations 1412 1414 and/or enhancements 1412 1414 desired by one or a plurality of users.

Superior viewer example views: Turning now to FIG. 45, “LTP Views with an SVS (example),” in some examples a viewer 1420a stands in front of the right side of an LTP 1422a while holding a remote control 1425 which provides one of multiple means to control said LTP 1422a, and hears audio from the remote location by means of audio speaker(s) 1424. In some examples that viewer 1420b has moved to the center of the LTP 1422b while continuing to hold a remote control that controls said LTP 1422b. In some examples that viewer 1420c has moved to the near left side of an LTP 1422c while continuing to hold a remote control that controls said LTP 1422c. As illustrated in FIG. 18 said viewer 1420a 1420b 1420c is connected in real-time with an RTP that is located on the Grand Canal in Venice, Italy, and is viewing it during sunset. By utilizing the RTP's wide and tall view of the Grand Canal an SVS subsystem can display varying simulated realistic window views in real-time to viewer 1420a 1420b 1420c.

In a first example said viewer 1420a has approached the LTP 1422a for a closer view of the Basilica of St. Mary of Health (Basilica di Santa Maria della Salute), a Roman Catholic church whose dome has become a landmark and emblem of Venice. In response to said change in the viewer's location 1420a an SVS sensor 1421a determines the new location of the viewer 1420a with respect to the LTP display screen 1422a, calculates 1423 and displays 1423 the appropriate view 1422a for said viewer's position 1420a to simulate the appropriate view through that “RTP window” in that location on the Grand Canal. In another example said viewer 1420b has stepped back from the LTP 1422b for a central view up the Grand Canal, and in response to said change in the viewer's location 1420b the SVS sensor 1421b determines the new location of viewer 1420b with respect to the LTP display screen 1422b, calculates 1423 and displays 1423 the appropriate view 1422b of the Grand Canal for said viewer's new position 1420b to simulate the appropriate view through that “RTP window” on the Grand Canal. Optionally, viewer 1420b may employ SVS commands by means such as a handheld remote control 1425 that control video processing 1423 and/or audio processing 1423 such as in some examples zooming in to see details, in some examples zooming out to see the big picture of the Grand Canal, in some examples audio zooming to hear specific sounds more clearly, etc.

In another example said viewer 1420c has stepped up close to the left side of the LTP 1422c for a close up view of a gondolier on Venice's Grand Canal, and in response to said change in the viewer's location 1420c the SVS sensor 1421c determines the new location of viewer 1420c with respect to the LTP display screen 1422c, calculates 1423 and displays 1423 the appropriate view 1422c of the gondolier and Grand Canal for said viewer's new position 1420c to simulate the appropriate view through that “RTP window” on the Grand Canal. Because said gondolier seems close enough, viewer 1420c calls “Hello” to gondolier and because the local RTP on the Grand Canal is full-featured, said viewer's voice is projected from the local RTP's speaker(s). If the gondolier answers “Ciao” in Italian, in some examples an automatic translation subsystem contextually identifies participants in the United States and Italy, that the US participant spoke the English word “hello” and the Italian participant responded in that language, and provides automatic real-time language translation as described elsewhere. In some examples US viewer 1420c may need to use a command or the handheld remote control 1425 to start a translation subsystem, service, application, etc. If a conversation ensues between said US viewer 1420c and said gondolier, in some examples automatic audio enhancement contextually identifies the appropriate remote participant(s) which in this case is a gondolier, and determines if said gondolier's voice is below sufficient audio legibility, and if so isolates and boosts said gondolier's voice audio to increase its clarity and volume by means such as noise cancellation, equalization, dynamic volume adjustment, etc. In some examples US viewer 1420c may need to use a command or the handheld remote control 1425 to start an audio enhancement processing application, subsystem, service, etc. As a result in some examples a US viewer 1420c may talk directly to a passing gondolier on Venice's Grand Canal.

In some examples a device or a device SVS includes one or a plurality of viewer sensors, a viewer detecting section, an optional viewer processing section and other device components as described elsewhere, such as in some examples display output processing 1252 in FIG. 31. In some examples one or a plurality of sensors may be employed individually or in combination to provide viewer detection and viewer location with respect to a device's display screen which in some examples is imaging such as by means of a camera(s), in some examples is ultrasonic, in some examples is infrared, in some examples is radar, in some examples is a plurality of audio microphones, in some examples is a plurality of pressure sensors such as in a floor, and in some examples is other detection means. In some examples each type of sensor provides its own type of data such as image data from a camera, so each corresponding processing by a viewer detecting section analyzes the data provided by each type of sensor. In some examples as image data is provided by an image sensor a face detecting section detects an object's face area, face size, face orientation, skin color, or other cues depending upon that type of sensor. Similarly, each type of sensor provides its corresponding data types such as the use of audio cues when the sensor(s) includes a plurality of microphones that determine presence and position by means of audio sounds and levels.

In some examples one objective of a device's sensor is to detect certain characterizing components of objects such as the face of a person relative to a device's screen, herein generally referred to as viewer detection. In some examples said viewer detection includes detecting one or a plurality of objects, then detecting a section of said object that characterizes a portion of said object, then detecting a human face as the characterizing portion. In some examples a number of known technologies may be employed such as in some examples technologies used in digital cameras to determine the presence of faces in a picture taking region, determine the distance to the detected faces, and employ that data to set the camera's focus so that one or a plurality of faces is automatically rendered clearly and in focus when a picture is taken. In addition, other known facial analysis technologies provide various types of face data analysis such as technologies used in digital cameras that determine when a face in a picture has blinked and then display a “blink error” or “blink warning” to the picture taker so the picture can be checked and retaken if needed. In some examples other face detection technologies are known for detecting one or a plurality of viewers with respect to a display screen such as the identification and use of skin colors, identification of candidate face region areas with hierarchical verification levels, etc.
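
By way of illustration only, the following is a minimal Python sketch of image-based viewer (face) detection of the kind described above, using the OpenCV library's bundled Haar-cascade face model, which is one known face detection means; the camera index, function names, and detection parameters are illustrative assumptions, not a definitive implementation of the viewer detecting section.

    # Minimal sketch of image-based viewer (face) detection, assuming OpenCV
    # is installed and a camera is attached as device 0.
    import cv2

    def detect_viewer_faces(frame):
        """Return a list of (x, y, w, h) face rectangles found in a BGR frame."""
        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Face position and size fall out of the rectangles; orientation and
        # skin-color cues would require additional models (not shown here).
        return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    if __name__ == "__main__":
        camera = cv2.VideoCapture(0)      # stands in for an SVS image sensor
        ok, frame = camera.read()
        if ok:
            for (x, y, w, h) in detect_viewer_faces(frame):
                print(f"face at ({x},{y}), size {w}x{h}")
        camera.release()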

In some examples the term viewer detecting section refers to software that is run by a device's processor(s), but with alternative types of sensors and sensor data this viewer detection may be implemented by different detection software, or alternatively by a hardware circuit or system. In some examples the viewer detection software is stored in a device's local and/or remote storage, said software is run, and the resulting processed viewer detection data such as viewer information, face size, face position, face orientation, etc. is stored in said device's memory. In some examples said device uses this processed viewer detection data in memory to adjust the device's display screen appropriately for the location(s) of one or a plurality of viewers. Said viewer detection data is retained in memory for repeated use until viewer detection is performed again, at which time newly processed viewer detection data overwrites it and is stored for use until the next viewer detection occurs.

Turning now to FIG. 46, “SVS Process,” some examples are illustrated in which a device that includes an SVS (as used herein, the term SVS also includes any type of viewer sensor[s]) is turned on 1436 and the SVS is turned on 1436 and active 1436. In some examples the SVS sensor is a camera or other sensor that employs light, in which case an initial step is to measure luminance 1437 to determine if sufficient luminance is present 1438 because if there is insufficient luminance viewer detection that is based on images will produce erroneous results. Luminance may be measured 1437 by using image data from said SVS to determine if it possesses sufficient luminance 1438 to perform viewer detection 1440. If sufficient luminance is present 1437 1438 then viewer detection 1440 may be performed. If sufficient luminance is not present 1437 1438 the process performs a luminance adjustment step 1439 and then repeats the luminance measurement step 1437 to determine if there is sufficient luminance 1438. Sufficient luminance may be secured 1439 by one or a plurality of means such as in some examples opening a camera aperture 1439, in some examples increasing an image sensor's sensitivity 1439 such as by raising its ISO, or by other known means (such as in some examples means that are employed in video cameras that record acceptable images at extremely low lux levels). In the event luminance adjustment 1439 is performed and the subsequent luminance measurement 1437 indicates sufficient luminance 1438 is not present, then said luminance adjustment step 1439 is repeated with increased values and/or additional luminance sensitivity means until sufficient luminance 1438 is obtained and viewer detection may be performed 1440.
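
By way of illustration only, a minimal Python sketch of this measure-and-adjust luminance loop (steps 1437 through 1439 of FIG. 46) follows; the threshold value, the gain steps, and the capture_frame/set_gain callables are illustrative assumptions standing in for a real sensor interface.

    # Minimal sketch of the luminance loop of FIG. 46 (steps 1437-1439),
    # assuming an imaging SVS sensor that returns grayscale numpy frames.
    import numpy as np

    MIN_MEAN_LUMINANCE = 40.0          # assumed "sufficient" threshold (0-255)
    GAIN_STEPS = [1.0, 2.0, 4.0, 8.0]  # e.g. wider aperture / higher ISO steps

    def mean_luminance(gray_frame: np.ndarray) -> float:
        return float(gray_frame.mean())          # measure luminance (1437)

    def ensure_sufficient_luminance(capture_frame, set_gain):
        """Repeat measure (1437) / adjust (1439) until sufficient (1438)."""
        for gain in GAIN_STEPS:
            set_gain(gain)                       # luminance adjustment (1439)
            frame = capture_frame()
            if mean_luminance(frame) >= MIN_MEAN_LUMINANCE:
                return frame                     # proceed to detection (1440)
        raise RuntimeError("insufficient luminance at maximum sensitivity")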

In some examples viewer detection 1440 is image-based and performed by an SVS. Said image-based viewer detection 1440 starts by detecting a moving image, capturing it by means of an image sensor and analyzing the captured image data for face detection information such as skin color, face image(s), face size, face position, etc. At step 1441 it is determined whether one or a plurality of viewers has been detected and if no viewers are detected 1442 then the SVS and display are auto-set for a default 1447 viewer who is located centrally in front of the display and at a reasonable distance from it for that type of device (which may be reasonably estimated from known ergonomic data for certain types of mobile devices and certain types of stationary devices). Alternatively in some examples with a device in a fixed location, if no viewers are detected 1442 the SVS and display may be auto-set for a default 1447 that is based upon the entrance to the room in which said device is positioned so that the entrance of a viewer will trigger the SVS and cause its display to respond dynamically as said viewer moves into and through that room. Alternatively in some examples with a mobile or fixed device, if no viewers are detected 1442 the SVS and display may be auto-set for a default 1447 that represents the most common viewer location from which this display has been used in the past (if that device's previous viewer location raw data is stored and analyzed, with the analyzed data stored for future uses such as determining said default display setting). In some examples if no viewers are detected 1442 the SVS may loop in a motion detection process in which it repeatedly and periodically performs motion detection 1440 (such as in some examples periodically capturing two or a plurality of frames of image data and performing a motion detection comparison between them).
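
By way of illustration only, the following Python sketch shows the periodic motion-detection comparison between two captured frames described above; the difference threshold is an illustrative assumption.

    # Minimal sketch of the "loop in a motion detection process": capture two
    # frames periodically and compare them (step 1440).
    import numpy as np

    MOTION_THRESHOLD = 12.0   # assumed mean absolute pixel difference (0-255)

    def motion_detected(frame_a: np.ndarray, frame_b: np.ndarray) -> bool:
        """Compare two grayscale frames; a large difference suggests a viewer
        has entered or moved within the sensed area."""
        diff = np.abs(frame_a.astype(np.int16) - frame_b.astype(np.int16))
        return diff.mean() > MOTION_THRESHOLD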

In some examples the processing of SVS sensor data determines that one or a plurality of viewers are present 1441 in which case the detected viewer data is stored in memory and used to perform display adjustment 1447. In some examples other viewer engagement data may be stored in memory 1440 such as in some examples participation in a focused connection, in some examples other uses of a device as described elsewhere. Said viewer detection data as well as other viewer engagement data is retained in memory until viewer detection is performed again, at which time newly processed viewer data overwrites it and is retained in memory until the next viewer detection is performed. Storing viewer detection data and viewer engagement data makes it possible to determine the presence of one or a plurality of viewers, along with the optional partial or full engagement of said viewers with the display. In some examples sufficient or appropriate sensor data 1440 is available in memory so that an optional viewer processing section determines the viewer(s) orientation relative to the display screen 1445. In some examples where a face(s) has been detected 1441 the position, size and/or orientation of said face data 1441 may be used to determine the orientation 1445 of one or a plurality of viewers relative to the display screen 1446 as an indication of each viewer's partial or full attention to said display. In some examples viewer engagement 1446 includes audio sensor data 1440 and in some examples it includes data from other types of sensors. In some examples if one or a plurality of viewers are not engaged 1446 the viewer processing section may loop and repeatedly and periodically perform viewer engagement processing 1445 (such as in some examples periodically capturing a set of frames of image data and performing a face orientation comparison between them). In some examples if one or a plurality of viewers are not engaged 1446 the display may be adjusted to its default 1447 as described elsewhere. In some examples if one or a plurality of viewers are partly engaged 1446 such as in some examples by talking to each other in addition to paying intermittent attention to the display 1446; in some examples by using other handheld devices or mobile devices or stationary devices as well as paying intermittent attention to the display 1446; in some examples by multitasking as well as paying intermittent attention to the display 1446; in some examples by any other simultaneous activity or engagement as well as paying intermittent attention to the display 1446; then the optional viewer processing section determines that said partially engaged viewers should be treated as full viewers and included in the adjustment of the display. In some examples if one or a plurality of viewers are engaged 1446 the viewer processing section may periodically reconfirm said engagement by looping and performing viewer engagement processing 1445 (such as in some examples periodically capturing a set of frames of image data and performing a face orientation comparison between them).
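
By way of illustration only, the following Python sketch classifies full, partial, or no engagement (steps 1445 and 1446) from periodically sampled face orientation data; the yaw limit, the sampling ratios, and the assumption of an upstream pose estimator supplying yaw angles are all illustrative.

    # Minimal sketch of periodic viewer-engagement processing (1445/1446),
    # assuming an upstream face-pose estimator yields a yaw angle in degrees
    # (0 = facing the screen) for each sampled frame.
    ENGAGED_YAW_LIMIT = 25.0   # assumed "facing roughly toward the screen"

    def classify_engagement(yaw_samples: list) -> str:
        """Return 'full', 'partial', or 'none' from sampled face yaws."""
        if not yaw_samples:
            return "none"
        toward = sum(abs(y) <= ENGAGED_YAW_LIMIT for y in yaw_samples)
        ratio = toward / len(yaw_samples)
        if ratio > 0.8:
            return "full"
        if ratio > 0.2:
            return "partial"   # intermittent attention: still a full viewer
        return "none"          # fall back to the default display setting (1447)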

In some examples a recognition subsystem 1443 (as described elsewhere) is present and said image adjustment 1447 may utilize said recognition subsystem 1443 to determine one or a plurality of specific viewers, such as the owner or principal user of a device. In some examples recognition subsystem 1443 may be a service such as TP biometric recognition 1443. In some examples one or a plurality of recognizable identities may be prioritized 1444 such as in some examples the owner of the device in use, in some examples family or friends of the owner of the device in use, in some examples a recognizable member of a designated group or category of users of said device such as a company's employees whose cubes or offices are located around a particular conference room where said device is used, in some examples any other designated identity(ies) and/or group(s). In some examples one or a plurality of recognized identities 1443 may be prioritized 1444 so that said display adjustment 1447 may be completely prioritized to reflect the presence 1441 and/or optional orientation(s) 1445 of one or a plurality of said identified 1443 and prioritized 1444 viewers, such as by performing display adjustment 1447 as if only the identified 1443 and prioritized 1444 viewer(s) were present. In some examples one or a plurality of recognized identities 1443 may be prioritized 1444 so that said display adjustment 1447 may be partly prioritized to reflect the presence 1441 and/or optional orientation(s) 1445 of one or a plurality of said identified 1443 and prioritized 1444 viewers, such as by weighting each identified 1443 and prioritized 1444 viewer at a same higher value while applying a lower weighting to unidentified 1443 and unprioritized 1444 viewer(s). In some examples one or a plurality of recognized identities 1443 may be prioritized 1444 so that said display adjustment 1447 may be differentially prioritized based on the different identities of recognized viewers 1443 to reflect the presence 1441 and/or optional orientation(s) 1445 of one or a plurality of said identified 1443 and differentially prioritized 1444 viewers, such as by providing different weights for each identified 1443 and prioritized 1444 viewer as well as providing a lower weighting for unidentified 1443 and unprioritized 1444 viewer(s).
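
By way of illustration only, the following Python sketch shows differential weighting of recognized viewers (1443/1444) when computing the viewing position toward which the display is adjusted (1447); the weight table and field names are illustrative assumptions.

    # Minimal sketch of weighted display prioritization: recognized identities
    # receive higher weights; the display adjusts toward the weighted average.
    def weighted_viewing_position(viewers):
        """viewers: dicts with 'angle_deg', 'distance_m', 'identity' (or None)."""
        weights = {"owner": 1.0, "family": 0.6, None: 0.2}  # assumed priorities
        total = sum(weights.get(v["identity"], 0.2) for v in viewers)
        angle = sum(v["angle_deg"] * weights.get(v["identity"], 0.2)
                    for v in viewers) / total
        dist = sum(v["distance_m"] * weights.get(v["identity"], 0.2)
                   for v in viewers) / total
        return angle, dist

    # Example: the recognized owner dominates an unrecognized bystander.
    print(weighted_viewing_position([
        {"angle_deg": -10.0, "distance_m": 2.0, "identity": "owner"},
        {"angle_deg": 40.0, "distance_m": 3.5, "identity": None},
    ]))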

In some examples viewer detection 1440, optional viewer orientation 1445, and/or optional viewer engagement 1446 determines the one or a plurality of viewers and their position(s) with respect to the display. Since a device's output automatically adjusts 1447 based upon the position of one or a plurality of viewers, including dynamic changes in the position(s) of a viewer(s), the adjustment process is as follows and as described elsewhere. In some examples one viewer is detected 1440 1441 1445 1446 and the position of said viewer is determined with respect to the display, and in some examples the processor determines metrics for said viewer such as the viewer's angle from the center of the display in some examples, the viewer's distance from the center of the display in some examples, or other alignment metrics in some examples; and said position metrics are used to determine how the display should be adjusted 1447 to serve that viewer; and in some examples processing provides a corresponding positioning for the “window” output 1252 in FIG. 31 that simulates the view that is seen through a real window. In some examples a plurality of viewers is detected 1440 1441 1445 1446 and the positions of said viewers are determined with respect to the display, and in some examples the display is adjusted 1447 based on a median or average viewing position of the collection of viewers that are detected; that is, the metrics for each viewer are determined individually, then the set of two or more viewers' positions are determined with respect to the display screen, and the processing provides the average or best corresponding positioning for the window output 1252 that simulates the view seen through a real window.

In some examples after detecting one or a plurality of viewers 1440 1441 1445 1446 and adjusting said output display 1447 there is a change in the position of one or a plurality of viewers; and in some examples after detecting one or a plurality of viewers 1440 1441 1445 1446 and adjusting said output display 1447 there is a change in the number of viewers who are partially or fully engaged 1446 with the display; either individually or in combination various changes serve as a trigger(s) to perform viewer detection 1440 and repeat the appropriate steps that update the viewer data in memory so that processing may determine the corresponding adjustments of the display 1447 that synchronize its displayed “window” with the new location(s) and/or new collection of one or a plurality of recognized viewers. In some examples after detecting one or a plurality of recognized viewers 1440 1441 1445 1446 said viewers are automatically tracked by a SVS so that changes in their position(s), the addition of a new viewer(s), and/or the exiting of a recognized viewer(s) triggers viewer detection 1440 and an appropriate corresponding updating of the displayed “window” 1447. In some examples after detecting one or a plurality of recognized viewers 1440 1441 1445 1446 a subset of said viewers' behaviors, cues, or task indicators are tracked by a SVS so that changes in said tracked cues, behaviors, task indicators, etc. trigger viewer detection 1440 and corresponding updating of the “window” displayed 1447.

In some examples one or a plurality of settings that control the frequency, timing, smoothness, transitions, and other attributes of said display adjustments 1447 may optionally be set and saved 1448. In some examples this provides for different types of devices to employ display adjustments 1447 such as for example when a device has insufficient processing or bandwidth for smooth real-time display adjustments it may utilize settings for periodic adjustments with a specified type of transition such as a jump cut or page turn from one display view to the next display view. In some examples when said attributes are stored 1448, they are retrieved and applied 1448 at the start of said displays 1447 and continue to be applied 1448 to subsequent display adjustments 1447 until said attributes are edited and the updated settings are saved and stored 1448.
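
By way of illustration only, the following Python sketch persists and retrieves such display-adjustment attributes (1448) as a small settings file; the field names, default values, and file path are illustrative assumptions.

    # Minimal sketch of saving/retrieving display-adjustment settings (1448),
    # assuming simple JSON-file persistence.
    import json
    import pathlib

    DEFAULTS = {"update_hz": 30,             # adjustment frequency
                "transition": "smooth",      # or "jump_cut", "page_turn", "wipe"
                "min_change_deg": 2.0}       # ignore very small viewer moves

    SETTINGS_PATH = pathlib.Path("svs_display_settings.json")

    def save_settings(settings: dict) -> None:
        SETTINGS_PATH.write_text(json.dumps(settings))

    def load_settings() -> dict:
        if SETTINGS_PATH.exists():
            return {**DEFAULTS, **json.loads(SETTINGS_PATH.read_text())}
        return dict(DEFAULTS)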

Because the resulting display 1447 is digital, in some examples a viewer may choose to utilize various SVS commands 1449 that alter the display 1450 1447 in one or a plurality of ways. A range of commands, subsystems, services, applications, tools, resources, etc. may be used to implement those digital capabilities 1450 1447 including any known technology or service. Without limiting these digital capabilities some examples include in some examples zooming in or out 1449 1450 1447; in some examples changing the display's view 1449 1450 1447; in some examples taking a static snapshot of a display 1449 1450 1447; in some examples performing various types of analysis on live video or on a static image or snapshot 1449 1450 1447; in some examples identifying an identity or object in a display 1449 1450 1447; in some examples retrieving information about an identified identity, object, etc. 1449 1450 1447; in some examples enhancing audio so that remote conversations, sounds, etc. are heard clearly 1449 1450 1447; in some examples making visible or surreptitious recordings 1449 1450 1447; in some examples altering and/or editing the display, its participants, location or content in real-time 1449 1450 1447; in some examples substituting an edited display as source output with or without informing other participants 1449 1450 1447; in some examples recording an edited display as if it were a source event with or without adding information that an altered display was recorded 1449 1450 1447; or in some examples performing other real-time digital manipulations. In some examples SVS commands may be entered 1449 1450 by voice and one or a plurality of wired and/or wireless microphones; in some examples SVS commands may be entered 1449 1450 by gestures; in some examples SVS commands may be entered 1449 1450 by a handheld remote control; in some examples SVS commands may be entered 1449 1450 by a touchscreen; in some examples SVS commands may be entered 1449 1450 by visible on-screen controls; in some examples SVS commands may be entered 1449 1450 by pointing devices; in some examples SVS commands may be entered 1449 1450 by menu systems; in some examples SVS commands may be entered 1449 1450 by any known type of software or hardware control or controller. In some examples of commands entered 1449 such as in some examples “right” 1450, in some examples “left” 1450, in some examples “down” 1450, in some examples “up” 1450, in some examples “zoom in” 1450, in some examples “zoom out” 1450, in some examples “recognize identity(ies)” 1450, in some examples “retrieve (identity name's) data” 1450, in some examples “make (identity name) invisible” 1450, in some examples “track (identity name)” 1450, in some examples “start (or pause or stop) recording” 1450, or any other available command 1449 1450, device processing provides the appropriate command(s) and/or processing steps to the appropriate display output(s) 1450 or to the appropriate digital processing application(s) 1450 in some examples to move the image(s) displayed the appropriate amount 1450, in some examples to carry out the corresponding digital image processing functions 1450, in some examples to utilize local device and/or remote resources to perform said commands 1450.
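
By way of illustration only, the following Python sketch dispatches a few of the named commands ("right", "zoom in", etc.) to display-adjustment handlers (1449/1450); the view-state fields, step sizes, and handler names are illustrative assumptions, and the input channel (voice, remote, gesture) is abstracted away as parsed command strings.

    # Minimal sketch of SVS command dispatch (1449/1450): parsed command
    # strings map to handlers that mutate a simple view state.
    def pan(view, dx_deg=0.0, dy_deg=0.0):
        view["pan_deg"] += dx_deg
        view["tilt_deg"] += dy_deg
        return view

    def zoom(view, factor):
        view["zoom"] *= factor
        return view

    COMMANDS = {
        "left":     lambda v: pan(v, dx_deg=-5.0),
        "right":    lambda v: pan(v, dx_deg=+5.0),
        "up":       lambda v: pan(v, dy_deg=+5.0),
        "down":     lambda v: pan(v, dy_deg=-5.0),
        "zoom in":  lambda v: zoom(v, 1.25),
        "zoom out": lambda v: zoom(v, 0.8),
    }

    view_state = {"pan_deg": 0.0, "tilt_deg": 0.0, "zoom": 1.0}
    for spoken in ["right", "right", "zoom in"]:   # e.g. from a voice recognizer
        view_state = COMMANDS[spoken](view_state)
    print(view_state)   # {'pan_deg': 10.0, 'tilt_deg': 0.0, 'zoom': 1.25}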

In some examples commands entered 1451 may be to set 1451, edit 1451 and/or save 1452 attributes of the SVS subsystem such as in some examples the sensitivity of luminance measurement 1437 and/or luminance adjustment(s) 1438 1439 (if an SVS sensor employs light); in some examples settings for viewer detection features 1440; in some examples selecting from a set of default(s) 1442 when viewers are not detected 1441; in some examples motion detection parameters 1442 when viewers are detected 1441 or in some examples when viewers are not detected 1441; in some examples the complete use, weighted use or non-use of a recognition subsystem 1443 1444 if a recognition subsystem is present; in some examples the timing of a display's responses to facial orientation changes 1445 to permit a viewer to have intermittent facial orientation toward other people or tasks before the display is changed; in some examples the timing for adjusting the display 1447 such as in some examples smooth real-time scrolling 1447, in some examples threshold-based jump cuts 1447, in some examples wipes 1447, in some examples scrolling 1447, in some examples other types of transitions between display adjustments 1447; in some examples the various attributes of each display command 1449 1450; in some examples automatic device operation 1453 1454 1455 when use is ending; in some examples any other SVS display or digital command setting(s) that may be saved and retrieved for use in the future. In some examples said saved setting(s) 1451 1452 are retrieved and applied to the operation of each subsystem feature and capability to which each setting applies.

In some examples when the use of a device with an SVS subsystem ends 1453 if the device remains on and is not turned off then after a defined period of non-use 1454 the device is timed out and set to a default such as in some examples a blank display screen 1454, in some examples a standby state 1454, in some examples everything powered down except motion detection and corresponding processing for detected motions 1454 that trigger a device “wake up” process if sufficient motion is detected 1454 with a resulting re-start of the SVS process 1437. In some examples of ending use 1453 a device is turned off 1455, in some examples the device is powered down 1455, in some examples the device is taken off line 1455, in some examples the device is put into another non-use state or mode 1455 with a resulting re-start 1436 when said device is turned on 1436 and its SVS is on and operating 1436. In some examples device use continues 1453 1440 and use is not interrupted.

Superior viewer field of view changes: In some examples an SVS determines the image(s) displayed by determining the location(s) of one or a plurality of viewers in relation to a display screen, and utilizing the viewer(s)'s angle and/or distance to adjust the image(s) displayed, simulating a view through a real window to said viewer(s). In some examples said simulated view on said display screen is dynamically updated to reflect the changing location(s) of one or a plurality of viewers in relation to said display screen by means of one or a plurality of SVS sensors as described elsewhere. In some examples the image(s) received for display are from one or a plurality of remote lenses with a wide enough angle and high enough resolution so that the portion of said received image(s) that is displayed may be adjusted rapidly, smoothly and in real-time to respond directly and quickly to the changing location(s) of said viewer(s). In some examples this process is utilized with stored pre-recorded images whether they are from natural sources such as the real world, from pre-recorded entertainment programs, from synthesized and blended realities such as described elsewhere, or from other stored sources. Alternatively, in some examples said received images may be from one or a plurality of remotely located cameras that have remotely controlled motorized camera functions such as panning, filtering, zooming, etc. and whose images are displayed directly on the display screen; in some examples changes in the location(s) of one or a plurality of viewers with respect to the display screen causes appropriate corresponding commands to be sent to said remotely controlled cameras to adjust their individual remote camera view(s) by panning, tilting, zooming, etc. to provide said simulated view(s) through a real window on said display screen. Alternatively, in some examples said received image(s) may be received from any AID/AOD (as described elsewhere) and/or any TP device (as described elsewhere) with a camera function and communication capability for live viewing, and/or with a camera function and storage capability for viewing stored images.

Turning now to FIG. 47, “SVS Changing Field of View Due to Viewer Horizontal Location(s),” in some examples the received image 1460A is larger than the viewing area of a display screen 1462A that in some examples is mounted on a wall 1461A. In some examples an SVS sensor determines the location of a viewer 1464A as described elsewhere. For located viewer 1464A, a horizontal portion 1465A of said received image 1460A is displayed in said display screen's viewing area 1462A as determined by a viewer's angle 1468A between an imaginary line 1467A that is perpendicular to the display screen's center 1466A and an imaginary line between said viewer 1464A and the center of the display screen 1466A. In some examples a plurality of viewers is detected and the location of each viewer with respect to the display screen 1462A is determined by said SVS subsystem as described elsewhere; in some examples for each viewer 1464A that viewer's angle 1468A is determined based on an imaginary line between said viewer 1464A and the center of the display screen 1466A, and in some examples the displayed portion 1465A of the image received 1460A is selected based on the median or average viewing angle of the collection of viewers that are detected and located; that is, the angle 1468A for each viewer 1464A is determined individually, then the set of viewers' angles are determined with respect to the display screen, and known processing means provides the average or best corresponding positioning for the simulated window displayed 1465A that simulates the view seen through a real window from that average or median viewing location. In some examples a plurality of viewers is detected and a recognition subsystem is present and employed to determine the identity of said detected viewers; in some examples a subset of detected viewers is selected based upon identity recognition, with varying preset prioritization or weighting based upon the identity of each recognized viewer (such as the highest priority for the owner of the device in use); and therefore in some examples the simulated window position 1465A that is displayed 1462A provides a more realistic simulated window view for one or a plurality of recognized and prioritized detected viewers.
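
By way of illustration only, the following Python sketch computes which horizontal portion (1465A) of a wide received image (1460A) to display from the viewer's angle (1468A). The linear mapping from viewer angle to crop offset is an illustrative assumption; a fuller model would use the actual window geometry and the capture lens's field of view.

    # Minimal sketch: select the horizontal crop of a wide image from the
    # signed viewer angle (positive = viewer to the right of the perpendicular
    # through the screen center, which, as through a real window, shifts the
    # visible portion toward the left of the scene).
    def horizontal_crop(image_width_px, crop_width_px, viewer_angle_deg,
                        max_angle_deg=60.0):
        """Return (left, right) pixel bounds of the portion to display."""
        travel = image_width_px - crop_width_px   # spare width to pan across
        center = image_width_px / 2
        # clamp, then map [-max, +max] degrees onto [-travel/2, +travel/2] px
        a = max(-max_angle_deg, min(max_angle_deg, viewer_angle_deg))
        offset = -(a / max_angle_deg) * (travel / 2)
        left = int(center + offset - crop_width_px / 2)
        return left, left + crop_width_px

    # Viewer to the right of center sees the left portion of the scene:
    print(horizontal_crop(3840, 1920, +30.0))   # -> (480, 2400)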

In some examples a viewer 1464A moves 1464C with respect to the display screen 1462A 1462B, with a change such as from location 1464A to location 1464B with respect to said display screen. Since received image 1460B is larger than the viewing area of the display screen 1462B that in some examples is mounted on a wall 1461B, in some examples an SVS sensor determines the new location of the viewer 1464B as described elsewhere. For located viewer 1464B, a responsively adjusted horizontal portion 1465B of said received image 1460B is displayed in said display screen's viewing area 1462B as determined by said viewer's new angle 1468B between an imaginary line 1467B that is perpendicular to the display screen's center 1466B and an imaginary line between said viewer 1464B and the center of the display screen 1466B. In some examples a subsystem employs means (as described elsewhere) to determine the location of one or a plurality of viewers based on their individual angle(s) with respect to said display screen; and in some examples said subsystem employs known processing means to calculate and select the appropriate image(s) 1465A 1465B for each respective viewer location 1464A 1464B as well as the (optional) dynamic transition(s) as said viewer moves 1464C between locations, in order to simulate a real window's view for the one or a plurality of viewers.

In some examples a viewer starts in position 1464B with angle 1468B with respect to an imaginary line 1467B that is perpendicular to the center 1466B of the plane of the display screen 1462B, which is on the right side of said display, so the portion of received image 1460B determined by processing is the left side of received image 1460B, which is centered on the Basilica of St. Mary of Health (Basilica di Santa Maria della Salute) 1465B on Venice's Grand Canal. If said viewer keeps a constant distance from said display screen but moves his or her location to the left side of said display with angle 1468A with respect to an imaginary line 1467A that is perpendicular to the center 1466A of the plane of the display screen 1462A, processing would adjust the display to correspond to said viewer's new position 1464A and show the right portion 1465A of received image 1460A.

In some examples said display screen alteration 1465A 1465B in response to said viewer's location change 1464A 1464B with respect to a display screen 1462A 1462B, as well as additional SVS digital display functions as described elsewhere, may be provided by an application designed for use with one or a plurality of display devices that utilize an appropriate viewer sensor and processing means to adjust the image(s) displayed in order to simulate a dynamic window view to one or a plurality of viewers; with said application stored as code on local storage, remote storage, or both; with said application available as a computer program product, a downloadable application, a network service, or in another format. Said application comprises means for receiving and displaying one or a plurality of images; means for determining the location(s) of one or a plurality of viewers with respect to said display; means for calculating and displaying an appropriate portion of said received image(s) based on angle and/or distance of one or a plurality of viewers from said display; and means for outputting the appropriate portion(s) of said received image(s) on said display screen in order to simulate a dynamic view through a live window for one or a plurality of viewers.

Turning now to FIG. 48, “SVS Changing Field of View Due to Viewer Distance from Screen,” in some examples received image 1470A is larger than the viewing area of a display screen 1472A that in some examples is mounted on a wall 1471A. In some examples an SVS sensor determines the location of a viewer 1473A as described elsewhere, wherein said viewer location comprises the distance 1474A between said viewer 1473A and the center of said display screen 1472A; and based on said distance 1474A displays a portion 1475A of said received image 1470A.

In some examples a plurality of viewers is detected and the distance 1474A of each viewer from the center of said display screen 1472A is determined by said SVS subsystem as described elsewhere; in some examples for each viewer 1473A that viewer's distance 1474A from the center of said screen is determined, and in some examples the displayed portion 1475A of the image received 1470A is selected based upon a median or average viewing distance of the collection of viewers that are detected and located; that is, the distance 1474A for each viewer 1473A is determined individually, then the set of viewers' distances are determined with respect to the display screen, and known processing means provides the average or best corresponding simulated window displayed 1475A that simulates the view seen through a real window from that average or median viewing location. In some examples a plurality of viewers is detected and a recognition subsystem is present and employed to determine the identity of said detected viewers; in some examples a subset of detected viewers is selected based upon identity recognition, with varying preset prioritization or weighting based upon the identity of each recognized viewer (such as the highest priority for the owner of the device in use); and therefore in some examples the simulated window position 1475A that is displayed 1472A provides a more realistic simulated window view for one or a plurality of recognized and prioritized detected viewers.

In some examples a viewer 1473A moves 1474C closer with respect to the display screen 1472A 1472B, with a change such as from location 1473A to location 1473B with respect to said display screen. Since received image 1470B is larger than the viewing area of the display screen 1472B that in some examples is mounted on a wall 1471B, in some examples an SVS sensor determines the new location of the viewer 1473B as described elsewhere. For located viewer 1473B, a responsively adjusted portion 1475B of said received image 1470B is displayed in said display screen's viewing area 1472B as determined by said viewer's new distance 1474B from the center of said display screen 1472B. In some examples a subsystem employs means (as described elsewhere) to determine the distance of one or a plurality of viewers based upon their individual distance(s) with respect to the center of said display screen; and in some examples said subsystem employs known processing means to calculate and select the appropriate image(s) 1475A 1475B for each respective viewer location 1473A 1473B as well as the (optional) dynamic transition(s) as said viewer moves 1474C between locations, in order to simulate a real window's view for the one or a plurality of viewers.

In some examples the distance 1474B of viewer 1473B from the center of said display screen 1472B corresponds to the distance and lens size at which said received image 1470B is acquired, so that the image received 1470B may be displayed directly on the display screen 1472B; in a closely related example the distance 1474B of viewer 1473B from the center of said display screen 1472B is only slightly different from the distance and lens size at which said received image 1470B is generated, so that the image received 1470B need be adjusted only slightly 1475B before being displayed on the display screen 1472B.

In some examples the distance of viewer 1473B changes such as to the distance of viewer 1473A in which said new distance 1474A increases by distance 1474C, so that the adjusted displayed image 1475A is zoomed in and magnified on said display screen to simulate a real window's view at the new distance 1474A. As this example illustrates, changes in viewer distance from said display screen may result in some examples in digitally zooming in and in some examples digitally zooming out from the received image(s), or in some examples selecting between a plurality of received images that are gathered with different lenses of different zoom magnifications and then adjusting the appropriately sized image to match a viewer's corresponding distance from a display screen and displaying said appropriately selected and appropriately adjusted image on the display screen.
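
By way of illustration only, the following Python sketch shows distance-driven digital zoom of the kind FIG. 48 describes: as the viewer backs away from the screen, a smaller portion of the received image is shown, magnified, matching the narrower field a real window would present. The reference distance at which the received image maps 1:1 to the screen is an illustrative assumption.

    # Minimal sketch of distance-driven digital zoom (FIG. 48).
    def zoomed_crop(image_w, image_h, viewer_distance_m,
                    reference_distance_m=2.0):
        """Return the (w, h) of the image portion to display at this distance.

        At the reference distance the full image is shown directly; at twice
        that distance the window's field of view halves, so the crop halves
        and is magnified to fill the screen.
        """
        scale = min(1.0, reference_distance_m / viewer_distance_m)
        return int(image_w * scale), int(image_h * scale)

    print(zoomed_crop(3840, 2160, 2.0))   # (3840, 2160): display directly
    print(zoomed_crop(3840, 2160, 4.0))   # (1920, 1080): zoomed in, magnified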

In some examples a display screen 1462A 1462B 1472A 1472B is flat, one or a plurality of viewers 1464A 1464B 1473A 1473B are detected with respect to said display screen and the location(s) of said viewer(s) is based on the angle(s) of said viewer(s) with respect to an imaginary line 1467A 1467B that is perpendicular to the center of said display screen and a line that extends between one or a plurality of viewers and the center of said display screen, and the location(s) of said viewer(s) is also based on the distance of one or a plurality of viewers from the center of said display screen; and in some examples said subsystem employs known processing means to calculate and select the appropriate image(s) for the location(s) of one or a plurality of viewer(s) as well as the (optional) dynamic transition(s) as said viewer(s) move between locations, in order to simulate a real window's view for the one or a plurality of viewers.

Continuous Digital Reality Subsystem/Service: When a user stands up and looks out a physical window the world is already there, without any need to turn the outside on when looking at the window, or turn the window off when the user leaves the room. Similarly, when a user goes to a closed door and opens it and walks through the door the next room or the outside is already there, without any need to turn on the new place, or any need to turn off the place after leaving it. “Physical reality” is always “present” and “sensible” whenever we are in it, when we turn to view it, or when we enter a new place. In the ARTPM “digital reality” works in a parallel way to “physical reality”—the user's digital reality is continuous and present, but this is produced electronically so that digital reality is automatically visible, usable and ready. In some examples users do not need to take the steps required by current electronic devices and digital communications, where each device must be turned on and off (like booting a PC, then loading video conferencing software and using it to select someone to call, then using it to make a video phone call); and each current electronic device's connection must be made separately (like making a mobile phone call or starting and setting up a video conference); and in our current digital electronic devices when most “uses” are ended a device's use is finished and that feature must be closed or the device must be turned off, like running shutdown on a PC, using a remote to turn the power off on a television, or hanging up a phone call.

Automated On/Off/On/Off Devices: Many consumer electronic devices attempt to simplify turning devices on and off somewhat by adding immediate on/off, which is often achieved by means of a power-down state where a device's most recent operation(s) is suspended and saved (such as a home theater's settings when that system includes multiple linked devices), ready to be resumed in that state when power is restored. For example, a major PC annoyance is being forced to wait while the PC boots up (e.g., turns on) and then wait again when the PC shuts down (e.g., turns off). After 30 years of PC development, it has been said that the large revenues from selling PC operating systems force users to see and use (and endure the frustrations of) a PC operating system—a component every other consumer electronic device has embedded and made invisible (at far lower revenues than the PC's operating system vendor receives).

FIG. 49, “Continuous Digital Reality (Auto On-Off)”: In some examples digital reality works in a parallel way to physical reality (which is always present without needing to be turned on and off). In some examples a TP device is on and includes an SVS or another type of in-use detector, including in some examples a detector or subsystem that can determine the identity of a user. In some examples said detector(s) determine that a device is no longer in use, and in some examples device use is manually suspended, and in some examples the device's current state is then saved as part of putting a device in a suspended state. In some examples use begins with a suspended device such as by entering a room where said device is present but suspended, and in some examples a detector recognizes both presence and identity and retrieves said identity's saved state. In some examples a device is in use by an identity, and said identity begins use of a second device, and in some examples the second device's detector recognizes both presence and identity and retrieves said identity's current state, and in some examples retrieves said identity's most recently saved state. In some examples detection is performed without recognition, or in some examples detection and recognition are performed but a user wants to use a different identity; in some examples a user therefore performs login and authentication, and the new identity's last saved state is retrieved and restored. In some examples the result is automated simultaneous digital reality by a plurality of devices, and in some examples the result is manually directed digital reality by a plurality of devices.

Turning now to FIG. 49, “Continuous Digital Reality Subsystem/Service (Automated On-Off Subsystem),” in some examples an LTP 1481 may include continuous digital reality/automated on-off as one or a plurality of subsystems; in some examples an MTP 1481 may include continuous digital reality/automated on-off as one or a plurality of subsystems; in some examples an RTP 1482 may include continuous digital reality/automated on-off as one or a plurality of subsystems; in some examples an AID/AOD 1483 that is running a VTP may include continuous digital reality/automated on-off as one or a plurality of subsystems, in some examples a TP subsidiary device 1485 that is running RCTP may include continuous digital reality/automated on-off as one or a plurality of subsystems, in some examples another type of electronic device(s) that are enabled with an in-use detector 1488 1495 (such as in some examples an SVS, in some examples a motion detector, and in some examples another type of in-use detector) may include continuous digital reality and/or automated on-off as one or a plurality of subsystems; and in some examples another type of electronic device that is enabled with an in-use detector and user recognition (for more secure on/off) may include continuous digital reality and/or automated on-off as one or a plurality of subsystems. In some examples said devices 1481 1482 1483 1485 are connected by one or a plurality of disparate networks 1480; in some examples parts of a continuous digital reality/automated on-off subsystem may be distributed such that various functions (such as in some examples “state” storage, identity recognition, etc.) are located in local and/or remote devices, storage, and media so that various steps are performed separately and link through said network(s) 1480; in some examples the equivalent of a continuous digital reality/automated on-off subsystem may be provided by means other than a device's local subsystem and provided over said network(s) 1480.

Subsystem summary of continuous digital reality/Automated on-off: In some examples a user has one identity, and in some examples a user has multiple identities as described in FIGS. 166 through 175 and elsewhere so that in various examples “user(s)” and “identity(ies)” may each be employed to describe continuous digital presence. In some examples said process 1486 includes both continuous digital reality 1486 and automated on/off of continuous digital reality devices, such that a continuous digital reality 1486 is automatically turned on and connected when one or a plurality of appropriate and enabled devices 1481 1482 1483 1485 is in use, in some examples when one or a plurality of said devices is added to use, in some examples when one or a plurality of said devices is present and capable of being used, etc.; and also said continuous digital reality 1486 is automatically saved, suspended and disconnected when the use of, or capability of using one or a plurality of appropriate and enabled devices 1481 1482 1483 1485 is ended—in order to simulate the experience of an “always on” continuous digital reality presence for an identity. In some examples when an identity enters a room 1495 the appropriate and enabled devices 1494 1481 1482 1483 1485 immediately and automatically turn on 1498 and reestablish said identity's current session(s) 1493 1487 as a continuous digital reality; and when said identity exits a room 1488 1489 the appropriate and enabled devices 1481 1482 1483 1485 immediately and automatically suspend their current session(s) 1491 and save that “state” 1493 in local and/or remote storage for retrieval and use by that identity's other appropriate and enabled devices 1494 1495 1481 1482 1483 1485—and as soon as said other devices are picked up or other preparation for use is begun 1495, said other devices 1481 1482 1483 1485 immediately and automatically turn on 1495 and reestablish said identity's current session(s) 1496 1498 1493 1487 as a continuous digital reality. In a similar fashion said process may be controlled manually to end use of one or a plurality of appropriate and enabled devices 1490 1491 1492 1493, or to manually change identity when initiating use 1496 1497 1487 of appropriate and enabled devices 1481 1482 1483 1485, or to change identity at any time 1496 1497 1487 during use of said devices; and in some examples when a user changes to a different identity 1496 that other identity's digital reality state(s) is retrieved from local and/or remote storage and reestablished 1493 1487 (in some examples including login and authentication of said different identity to provide security and/or identity control).
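
By way of illustration only, the following Python sketch models the suspend/save-state and detect/recognize/restore flow of FIG. 49; the in-memory dictionary stands in for the local and/or remote state store, and all class, method, and identity names are illustrative assumptions rather than the actual subsystem.

    # Minimal sketch of continuous digital reality / automated on-off:
    # suspend saves the identity's session state (1491/1493); detection plus
    # recognition restores it (1495/1496/1498/1487).
    saved_states = {}   # identity -> last saved session state (local/remote)

    class TPDevice:
        def __init__(self, name):
            self.name, self.identity, self.session = name, None, None
            self.on = False

        def suspend(self):                               # 1489/1490 -> 1491/1493
            if self.identity is not None:
                saved_states[self.identity] = self.session   # save state
            self.on, self.session = False, None   # powered down, detectors live

        def wake(self, recognized_identity):             # 1495 -> 1496
            self.on = True
            if recognized_identity in saved_states:      # 1498: known identity
                self.identity = recognized_identity
                self.session = saved_states[recognized_identity]  # 1493 restore
            else:
                self.identity = self.login_and_authenticate()     # 1497
                self.session = saved_states.get(self.identity, "new session")

        def login_and_authenticate(self):
            return "guest"   # placeholder for a real login/authentication step

    # The identity leaves the room with the LTP and picks up an MTP elsewhere:
    ltp, mtp = TPDevice("LTP"), TPDevice("MTP")
    ltp.identity, ltp.session, ltp.on = "alice", "viewing the Grand Canal", True
    ltp.suspend()        # exit detected: state saved
    mtp.wake("alice")    # presence + recognition: same digital reality resumes
    print(mtp.session)   # -> "viewing the Grand Canal"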

Appropriate and enabled devices: In some examples the process 1486 can begin with a device that is on and in use 1487 1481 1482 1483 1485 and has an in-use detector 1488 1495 (which in some examples is an SVS 1488 1495, in some examples a motion detector 1488 1495, and in some examples another type of detector or subsystem that may be used to determine usage 1488 1495 and/or an identity's presence 1488 1495, or other means that determine presence of in some examples a user 1488 1495, in some examples a recognized identity 1488 1495, or in some examples a person in front of a device 1488 1495). In some examples the process 1486 can begin with a device that is on and in use 1487 1481 1482 1483 1485 and has usage detection 1488 such as in some examples a timer that tracks inputs from a user I/O device 1488, or in some examples any other indication of use of a device 1488.

Identity or user detection: In some examples an identity is present 1488 then leaves the detected “presence” 1489 of said device 1481 1482 1483 1485 (including in some examples exiting a room 1489, in some examples putting a portable device away 1489, in some examples other actions that indicate that a device is no longer in use 1489); in some examples as a result, said device is automatically put into a suspend state 1491 (in which in some examples the device is powered down [such as appearing turned off but being maintained in a ready-to-be-turned-on-immediately state] 1491, in some examples a motion detector is active 1491 1488, in some examples use detection is active 1491 1488, in some examples said identity's session is saved 1491 1493 in local and/or remote storage so that it may be restored on the same device or on a different device [as described in FIG. 113 and elsewhere]).

Use detection: In some examples a device 1481 1482 1483 1485 is in use 1487 1488 then an identity or a user stops using said device 1489 (including in some examples not using said device for a period of time 1489, in some examples when a remotely used device 1482 1483 1485 no longer has one or a plurality of remote users, in some examples when a remotely used observation device 1482 no longer has one or a plurality of remote observers, in some examples triggering an indicator that a device is no longer in use 1489 such as in some examples powering down a device, in some examples ceasing another type of active indication that a device is in use 1489); in some examples as a result, said device is automatically put into a “suspend” state 1491 that includes saving said device's state (as described in FIG. 113 and elsewhere).

Suspend device: In some examples a device 1481 1482 1483 1485 is in use 1487 1488 and an identity or a user provides a manual command to suspend 1490 1491 1493 said device (with suspend as described elsewhere), wherein in some examples a suspend command 1490 may be entered by means of a user I/O device 1490 1491 1493, in some examples a suspend command 1490 may be a gesture 1490 1491 1493, in some examples a suspend command 1490 may be verbal 1490 1491 1493, or in some examples a suspend command 1490 may be another type of user indication to suspend use of a particular device 1490 1491 1493—whereby “suspend” includes saving said device's state (as described in FIG. 113 and elsewhere).

Save state: In some examples a device 1481 1482 1483 1485 is in use 1487 1488 and an identity or a user provides a manual command to save the current session and state 1492 1493 of said device (as described in FIG. 113 and elsewhere), wherein in some examples said save-state command 1492 may be entered by means of a user I/O device 1492 1493, in some examples said save-state command may be a gesture 1492 1493, in some examples said save-state command may be verbal 1492 1493, or in some examples may be another type of user indication to save the current state of a particular device 1492 1493.

Detecting presence at, or use by a powered down or suspended device: In some examples a device 1481 1482 1483 1485 is suspended 1491 1493 as described above so that certain detectors remain active 1494 1495, and is in a powered down state 1494 such as in some examples when no one is present in a room 1488 1489, in some examples when a portable device is closed or put away 1488 1489, in some examples when a remotely used device 1482 1483 1485 does not have any remote users, in some examples when a remotely used observation device 1482 does not have any remote observers, in some examples when a manual suspend command has been issued 1488 1490, in some examples when there is no indication of use 1488, or in some examples where there is another indication (or lack thereof) that causes device suspension 1488 1490 1491 1493 as described elsewhere. In some examples motion is detected 1495 or use is detected 1495 by means such as entering a room 1495, in some examples by taking out a portable device 1495, in some examples by powering on a device 1495, in some examples by opening the top or cover of a device 1495, in some examples by contacting an observation device to begin observing 1495, in some examples by starting to use a user I/O device that sends a command or an indication of use to said device 1495, and in some examples by other actions that trigger an indication that a user is present or that a device is in use 1495.

Recognition of previous identity(ies): In some examples when presence or use is detected 1495 said device has identity recognition capability 1496 (such as in some examples face recognition 1496, in some examples fingerprint recognition 1496, in some examples other biometric recognition 1496, or in some examples another type of known recognition capability 1496); in some examples said device does not have recognition capability but is linked to a remote device or service that provides identity recognition 1496; and where identity recognition is available either locally or remotely recognition may be performed 1496. In some examples identity recognition is performed 1496 and the identity who was previously using the device is recognized 1498, and the device's previous state(s) and session(s) are retrieved 1493 (as described in FIG. 113 and elsewhere) in some examples from said device's local storage 1493, in some examples from said device's memory 1493, and in some examples from remote storage 1493. In some examples after the previous state(s) and session(s) are retrieved and restored, said device is on and available for use 1487.

Different identity/Not the previous identity(ies): In some examples identity recognition is performed 1496 and the identity who was previously using the device is not recognized 1498, and therefore the device's previous state(s) and session(s) are not restored; in some examples login and authentication 1497 are required to initiate a new session 1497. In some examples said login and authentication 1497 fail and in this case the device returns to a suspended state 1495 awaiting an appropriate indication(s) of presence or use. In some examples said login and authentication 1497 succeed and in this case that other identity's previous state(s) and session(s) are retrieved 1493 and restored for use 1487 (as described in FIG. 113 and elsewhere) in some examples from said device's local storage 1493, in some examples from said device's memory 1493, and in some examples from remote storage 1493. In some examples after said other identity's previous state(s) and session(s) are retrieved and restored, said device is on and available for use 1487.

Automated simultaneous digital reality use by a plurality of devices: In some examples a first device 1487 1481 1482 1483 1485 is in use and a user desires to simultaneously use a second or plurality of appropriate and enabled devices 1496 1481 1482 1483 1485 (herein called “additional device[s]”); in some examples the additional device(s) are turned on automatically by presence or use detection 1495 as soon as they are physically approached 1495, used 1495, powered on 1495, opened 1495, etc. In some examples said additional device(s) have identity recognition capability 1496 (as described elsewhere); in some examples said additional device(s) do not have recognition capability but are linked to a remote device or service that provides identity recognition 1496; and where identity recognition is available either locally or remotely identity recognition may be performed 1496. In some examples identity recognition is performed 1496 and the current identity on said first device is recognized 1498 by said additional device(s); in this case the first device's state(s) and session(s) are accessed and retrieved 1498 1492 1493 1487 by issuing an automated save command 1492 1493 to said first device and performing retrieval 1497 1493 1487 from local and/or remote storage. In some examples after the previous state(s) and session(s) are retrieved and restored 1496 1498 1493, said additional device(s) is on and available for use 1487.

Manual simultaneous digital reality use by a plurality of devices: In some examples the additional device(s) do not include motion detection 1495 and/or use detection 1495 and therefore must be powered on manually rather than automatically. In some examples the additional device(s) do not include identity recognition 1496 and therefore must be logged into 1497 with the identity in use on said first device 1487 1497; in some examples the first device's state(s) and session(s) are accessed and retrieved by issuing a manual save command 1492 1493 to said first device and, after login to said additional device(s) 1497, performing retrieval 1497 1493 and resuming 1487 said state(s) and session(s) from said first device's stored state(s) and session(s). In some examples after the previous state(s) and session(s) are retrieved and restored 1496 1498 1493, said additional device(s) is on and available for use 1487.

FIG. 50, “TP Device Broadcasts”: In some examples one or a plurality of digital outputs are produced (such as in some examples TPDP events, in some examples RTP places, in some examples constructed digital realities, in some examples streaming TP sources, in some examples TP Broadcasts, in some examples TP directories, and in some examples other digital sources or stored resources created or provided over one or a plurality of networks). In some examples means are provided for distributing said sources and/or resources, and in some examples means are provided for finding said sources and/or resources. In some examples said means include automated metadata naming and tagging, and in some examples said means include manual metadata naming and tagging. In some examples outputs are distributed in real time as they are produced, and in some examples outputs are recorded and stored so they may be scheduled for streamed distribution, or retrieved on demand. In some examples outputs may be associated with schedules, in some examples with alerts, in some examples with trigger events, in some examples with stored finding means (such as in some examples electronic program guides, in some examples topic-based channels, in some examples search engines, in some examples database lookups, and in some examples dashboards), in some examples with APIs for third-party access, and in some examples by other distribution and finding means. In some examples related information can be provided with output sources or resources, and in some examples links or other means to associate related information can be provided with output sources or resources.

Turning now to FIG. 50, “TP Device Source(s) Output Subsystem,” some examples are illustrated whereby individual, corporate and other types of contributors may make their own sources (such as in some examples TPDP events, in some examples RTP places, in some examples constructed digital realities, in some examples streaming TP sources, in some examples TP broadcasts, in some examples other digital sources created or provided by one or a plurality of types of Teleportal devices as described elsewhere) available to others over one or a plurality of networks. Since Teleportal devices make it possible to support and provide a plurality of existing and new types of streaming sources (such as described elsewhere), said FIG. 50, “TP Device Source(s) Output Subsystem,” illustrates some examples of systems, methods, processes, applications and subsystems that support the distribution of sources created by various types of contributors and their devices.

In some examples this is accomplished by providing means for distributing sources from individual contributors' devices; in some examples one or a plurality of source(s) is provided by an LTP 1501; in some examples one or a plurality of source(s) is provided by an MTP 1501; in some examples one or a plurality of source(s) is provided by an RTP 1502; in some examples one or a plurality of source(s) is provided by an AID/AOD 1503; in some examples one or a plurality of source(s) is provided by a TP subsidiary device 1504; in some examples one or a plurality of source(s) is provided by a server 1505 (which may include in some examples one or a plurality of servers 1505, in some examples an application[s] 1505, in some examples a database[s] 1505, in some examples a service[s] 1505, in some examples a module within an application that utilizes an API to access a server or service 1505, or in some examples another networked means 1505). In some examples said devices 1501 1502 1503 1504 are connected by one or a plurality of disparate networks 1500. In some examples one or a plurality of sources is received by an LTP 1501; in some examples one or a plurality of sources is received by an MTP 1501; in some examples one or a plurality of sources is received by an RTP 1502; in some examples one or a plurality of sources is received by an AID/AOD 1503; in some examples one or a plurality of sources is received by a TP subsidiary device 1504; in some examples one or a plurality of source(s) is received by a server 1505 (which may include in some examples one or a plurality of applications 1505, in some examples a database[s] 1505, in some examples a service[s] 1505, in some examples a module within an application that utilizes an API to access a server or service 1505, or in some examples another networked means 1505); and in some examples one or a plurality of sources is received by another type of networked electronic device or communications device.

In some examples parts of a source's processing, functionality or streaming may be distributed such that various functions (such as in some examples creating a source, in some examples altering or blending a source, in some examples categorizing a source, in some examples tagging a source with metadata so that it is named and/or categorized and may be found, in some examples editing a source's category or metadata, in some examples storing a recorded source for later playback and/or streaming, in some examples storing metadata about a source for finding it, connecting to it [if live] or streaming it on demand [if recorded], in some examples subscribing to alerts from it, or in some examples other features or functions) are located in local and/or remote devices, storage, and media so that various steps are performed by separate devices and communicate through said network(s) 1500; in some examples the equivalent of a TP Device Source(s) Output Subsystem may be provided by means other than a device's local subsystem, such as in some examples a server 1505, in some examples a service 1505, in some examples an application 1505, in some examples a module within a local application that uses an API to access a server or service 1505, and in some examples by other means that are provided over said network(s) 1500.

Automated metadata naming and tagging: In some examples automated tagging 1507 is provided by streaming a portion of a source and utilizing known content analysis means to identify its components (such as in some examples its GPS location, in some examples identifying its dominant object(s), in some examples identifying its dominant identity(ies), in some examples identifying its dominant brand name(s) or product(s), in some examples performing OCR (Optical Character Recognition) on its visible words, or in some examples performing other types of content analysis and identification), then for said identified content retrieving appropriate tags 1508 (which herein includes tags 1508, metadata terms 1508, event names 1508, said event's schedule 1508, potentially related alerts 1508, appropriate links 1508, etc.). If in some examples said auto-retrieved tags 1508 are added to said source 1507 1508 then automated metadata naming and tagging is complete and said source is ready for streaming 1514.
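
As an illustrative sketch only, the automated flow above (stream a portion, analyze it, retrieve and attach tags) might be organized as follows; every name here (Source, analyze_sample, retrieve_tags, auto_tag) is a hypothetical stand-in for the known content-analysis means referenced in the text, not an API defined by this disclosure:

```python
# Hypothetical sketch of automated metadata naming and tagging (1507/1508).
# The stubs stand in for known content-analysis means (GPS lookup, dominant
# object/identity/brand detection, OCR); a real system would call actual services.
from dataclasses import dataclass, field

@dataclass
class Source:
    stream_id: str
    tags: list = field(default_factory=list)

def analyze_sample(sample: bytes) -> dict:
    """Stub: run content analysis on a streamed portion of a source."""
    return {"gps": "40.7580,-73.9855",
            "objects": ["billboard", "crowd"],
            "ocr_words": ["TIMES", "SQUARE"]}

def retrieve_tags(components: dict) -> list:
    """Stub: map identified components to tags, metadata terms, event names, etc."""
    tags = [("location", components["gps"])] if "gps" in components else []
    tags += [("object", o) for o in components.get("objects", [])]
    tags += [("text", w) for w in components.get("ocr_words", [])]
    return tags

def auto_tag(source: Source, sample: bytes) -> Source:
    components = analyze_sample(sample)             # identify content (1507)
    source.tags.extend(retrieve_tags(components))   # add auto-retrieved tags (1508)
    return source                                   # ready for streaming (1514)

print(auto_tag(Source("cam-001"), b"\x00" * 1024).tags)
```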

Manual metadata naming and tagging: In some examples manual tagging 1509 1510 1512 is provided by streaming a portion of a source and utilizing known content analysis means to identify its components (as described elsewhere), then for said identified content retrieving appropriate tags 1509 (as described elsewhere). In some examples one or a plurality of said retrieved tags 1509 are added 1510 1511 by displaying said retrieved tags 1509, selecting the specific tags or categories of tags to be added 1510 1511, and adding the selected tags 1511 to that source. In some examples one or a plurality of said retrieved tags 1509 are edited 1512 1513 before being added by displaying said retrieved tags 1509, selecting a specific tag or category of tag to be added 1512 1513, editing said tag (such as in some examples changing its tag name or other associated metadata) or category (such as in some examples changing its category name or other associated metadata), and adding the selected edited tags 1513 to that source. If in some examples said tags are manually added to said source 1509 1510 1512 then manual metadata naming and tagging is complete and said source is ready for streaming 1514.
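
The manual path differs from the automated one only in that retrieved tags are displayed, selected and optionally edited before being added; a minimal sketch, continuing the hypothetical names from the previous example:

```python
# Hypothetical sketch of manual tag selection and editing (1510-1513).
def select_tags(retrieved: list, chosen_indexes: list) -> list:
    """User selects specific displayed tags (or categories of tags) to add (1510/1511)."""
    return [retrieved[i] for i in chosen_indexes]

def edit_tag(tag: tuple, new_name: str) -> tuple:
    """User edits a tag's name or other associated metadata before adding (1512/1513)."""
    category, _ = tag
    return (category, new_name)

retrieved = [("location", "40.7580,-73.9855"), ("object", "billboard")]
added = select_tags(retrieved, [0]) + [edit_tag(retrieved[1], "digital billboard")]
print(added)   # tags added to the source, which is then ready for streaming (1514)
```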

Outputs: In some examples sources are distributed in real time as they are produced and processed 1514; in some examples sources are recorded and stored so that they may be scheduled for streamed distribution 1515 by specific means such as on a schedule 1515 1516 1519 (by entering one or a plurality of specific date(s) and time(s) for a source 1516, including listing it with various “finding” means 1516 1519 as described elsewhere); in some examples sources are set up to recognize trigger events and then send one or a plurality of alerts 1515 1517 1519 (as described elsewhere, which in brief summary includes identifying specified trigger event(s) 1517, focusing the source when said trigger event[s] occur 1517, and sending alerts to appropriate recipients 1517); in some examples sources are set up 1515 and submitted 1515 1518 to be found by other means 1519 that may utilize one or a plurality of databases 1518 1505 as described elsewhere (such as in FIG. 87 and elsewhere, which provides some examples such as PlanetCentrals, GoPorts, alerts systems, maps, dashboards, searches, top lists, APIs for third-party services, an ARM boundary, etc.). In some examples said scheduled outputs stored and accessible by means of one or a plurality of said databases may include one or a plurality of EPGs (Electronic Program Guides), where an EPG may in some examples be a channel set up in some examples by an individual, in some examples by a group, in some examples by a corporation, in some examples by a sponsor such as an advertiser, in some examples by a non-profit organization, in some examples by a governance, in some examples by a government, in some examples by a religious organization, or in some examples by another type of EPG creator. In some examples an illustration of an EPG is a channel that provides a “world” to live in digitally, such as by providing a type of digital background that a recipient may use to automatically replace other backgrounds; in some examples another illustration of an EPG is a channel that provides education, such as in some examples for pre-school age children for continuous automatic replacement of other backgrounds, and in some examples for other grade levels; in some examples another illustration of an EPG is a channel that provides simulated live moving components to include in constructing one's digital backgrounds, such as wildlife for naturalists, superheroes for comic book fans, major weapons such as tanks and aerial drones for military fans, and other types of components for other types of interests; and in some examples a plurality of other types of EPGs may be provided. In some examples a collection of channels, each with an EPG, may be provided as a network, such as in some examples by an individual, in some examples by a governance, in some examples by a school system, in some examples by a corporation, in some examples by a sponsor, in some examples by a government, and in some examples by another type of source.
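
A minimal sketch of the output choices just described (real-time streaming, scheduled distribution with a “finding” listing such as an EPG entry, and trigger-event alerts); the class and method names are illustrative assumptions, not part of the disclosure:

```python
# Hypothetical dispatcher for source outputs (1514-1519).
import datetime

class OutputDispatcher:
    def __init__(self):
        self.schedule = []    # (when, source_id), e.g. for an EPG listing (1516)
        self.triggers = {}    # source_id -> (predicate, recipients) (1517)
        self.directory = {}   # metadata for "finding" means (1518/1519)

    def stream_now(self, source_id):
        print(f"streaming {source_id} in real time")           # 1514

    def schedule_source(self, source_id, when, metadata):
        self.schedule.append((when, source_id))                # 1515/1516
        self.directory[source_id] = metadata                   # listed for finding

    def add_trigger(self, source_id, predicate, recipients):
        self.triggers[source_id] = (predicate, recipients)     # 1515/1517

    def on_event(self, source_id, event):
        predicate, recipients = self.triggers.get(source_id, (None, []))
        if predicate and predicate(event):
            for r in recipients:
                print(f"alert to {r}: {source_id} triggered by {event}")

d = OutputDispatcher()
d.schedule_source("cam-001", datetime.datetime(2024, 1, 1, 20, 0),
                  {"title": "Times Square live", "channel": "city-views"})
d.add_trigger("cam-001", lambda e: e == "motion", ["observer@example.com"])
d.on_event("cam-001", "motion")
```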

As a result in some examples personalized real-time sources 1514, in some examples scheduled sources 1515 1516, in some examples dynamically triggered sources (such as with alerts) 1515 1517, and in some examples “findable” sources may be provided directly to users 1518 1505 or in an accessible networked resource for potential users 1518 1505. In some examples said sources 1514 1515 1516 1517 1518 1505 may have their schedule or metadata information provided on demand by various finding means 1518 1519 1505. With either a current stream 1514 or metadata information 1515 1516 1517 1518 1519 1505 users may be able to branch immediately to perform various functions such as in some examples searching for related sources, in some examples altering an ARM boundary to include or exclude a particular source(s), in some examples adding a source to favorites, in some examples setting a reminder to use a source at a future date/time, in some examples recording a source now in real-time, in some examples scheduling the recording of a source at a future date/time when it is scheduled to be provided (such as on an EPG), etc.

In some examples links may be provided with a real-time source 1514 1519, or in some examples links may be provided with a source's metadata 1518 1519 1505, or in some examples links may be provided with a source's scheduled listing in a “finding” means 1515 1518 1519 1505 such as a top list or an electronic program guide; in these and other examples said links may provide access to related information, in some examples access to related sources, in some examples access to related vendors, in some examples access to related e-commerce purchases, in some examples access to advertisements, in some examples access to marketing information, in some examples access to interactive applications, in some examples access to individuals or identities, in some examples access to directories, in some examples access to pre-defined “canned” searches, etc. These various links may be provided in some examples as a list, in some examples as an interactive application, in some examples as a widget, in some examples as an interface component, in some examples as a portlet, in some examples as a servlet, in some examples as an API, etc.

Physical reality is geographically local, narrow and—unless one or a plurality of the people in a physical place is a traveler—predominantly a single-language environment; the local language is typically spoken by everyone. The ARTPM (Alternate Reality Teleportal Machine) illustrates means for SPLS's (Shared Planetary Life Spaces) in which one or a plurality of connections, digital realities, and IPTR uses are (optionally) on. These utilize networks and so may (optionally, and in some examples frequently) include people who are connected but speak different languages, and in some examples connect some people who are fluent in two or a plurality of different languages. Thus, there is a need for simple and direct communications between people who each speak one or a plurality of different languages, with a high level of automation, convenience and flexibility.

FIG. 51, “Language Translation (Automated or Manual Recognition)”: In some examples TP devices connect people who speak different languages, so in some examples language translation is provided. In some examples there is automated recognition and specification of each participant's (different) languages such as in some examples by voice sampling, in some examples by each identity's profile's language settings, in some examples by each identity's location settings, in some examples by other automated means or stored data; and in some examples there is manual recognition and specification of each participant's language(s). In some examples as each participant enters a communication language recognition automatically determines the participant's language, and in some examples that determination is performed manually. Said recognized language for each participant is used for both that participant's input to language translation, and for that participant's output from language translation. In some examples an automated language translation process adjusts the translation mapping as a plurality of participants enter or exit a communication, so that each participant's speech is received and translated and output as needed for each of the other participants. Said translations are performed in parallel so that a plurality of participants each speaks and hears in their own respective and different languages. In some examples language translation and speech synthesis are performed by any of a variety of means. In some examples language translation is performed on text, on documents, on presentations, and on other digital formats in addition to spoken language. In some examples language translations may also be recorded as text in one or a plurality of languages, so as to produce a transcript of a communication in one or a plurality of languages for the respective participants in the communication.

Turning now to FIG. 51, “Language Translation (Automated or Manual Recognition),” some examples are illustrated in which there is automated recognition of different languages (by voice sampling) or automated recognition of each known identity's language settings (by utilizing profile settings or other stored data), with automated language translation; some examples in which there is automated recognition of different languages or automated recognition of each known identity's language settings, with manual override to turn off automated translation; and some examples in which there is manual recognition of different languages, with automated translation. As a result both logged in users and anonymous users who speak different languages from each other can communicate in their native languages with (optional) automated language recognition and language translations so they are each able to speak and hear each other in a language in which they are fluent.

In some examples an LTP 1531 may include language recognition 1541 and/or language translation 1540; in some examples an MTP 1531 may include language recognition 1541 and/or language translation 1540; in some examples an RTP 1532 may include language recognition 1541 and/or language translation 1540; in some examples an AID/AOD 1533 that is running a VTP may include language recognition 1541 and/or language translation 1540; in some examples a TP subsidiary device 1534 (as described elsewhere) that is running RCTP may include language recognition 1541 and/or language translation 1540; in some examples one or a plurality of networked systems 1535 may include language recognition 1541 and/or language translation 1540 (such as in some examples a server[s] 1535, in some examples an application[s] 1535, in some examples a database[s] 1535, in some examples a service[s] 1535, in some examples a module within an application that utilizes an API to access a server or service 1535, or in some examples another networked means 1535); in some examples other known devices may include language recognition 1541 and/or language translation 1540 such as in some examples a mobile cellular telephone; in some examples a landline phone utilizing POTS (Plain Old Telephone Service); in some examples a PC computer, laptop, netbook, pad or tablet, or another device that includes communications; in some examples language recognition may be provided as a network subsystem 1535 1536 1541, a network service 1535 1536 1541, or by other remote means over a network 1535 such as an application, a translation server, etc.; in some examples language translation may be provided as a network subsystem 1535 1536 1540, a network service 1535 1536 1540, or by other remote means over a network 1535 such as an application, a translation server, etc.; in some examples another type of networked electronic device 1534 may include language recognition 1541 and/or language translation 1540.

In some examples automated language recognition 1541 and/or language translation 1540 (which are herein collectively known as a “translation subsystem” 1536) may take the form of an entirely hardware embodiment that is located in one or a plurality of locations and provided by one or a plurality of vendors, in some examples an entirely software embodiment that is located in one or a plurality of locations and provided by one or a plurality of vendors, or in some examples a combination of hardware and software that is located in one or a plurality of locations and provided by one or a plurality of vendors; in some examples automated language recognition 1541 and/or language translation 1540 may take the form of a computer program product (e.g., an unmodifiable or customizable computer software product) on a computer-readable storage medium; and in some examples automated language recognition 1541 and/or language translation 1540 may take the form of a web-implemented software product, module, component, and/or service (including a Web service accessible by means of an API for utilization by other applications and/or services, such as in some examples communication services). In some examples said devices, hardware, software, systems, services, applications, etc. 1536 are connected by one or a plurality of disparate networks 1530; in some examples parts of said language recognition 1541 and/or language translation 1540 may be distributed such that various functions are located in local and/or remote devices, storage, and media so that various steps are performed separately and link through said network(s) 1530; in some examples the equivalent of said language recognition 1541 and/or said language translation 1540 may be provided by means other than exemplified herein and provided over said network(s) 1530.

As a process, method and/or system (which may be implemented in a machine, hardware, software, service, application, module or by other means), language recognition 1541 may be automated or manually controlled. It includes steps such as identifying a fluent language for each Participant in a communication and automatically assigning a translation function when the fluent languages of the respective Participants differ, which causes a translator function (or subsystem, application, etc.) to be inserted into the spoken and/or text communications between those respective Participants.

In some examples the process 1536 begins when one or a plurality of participants enters 1537 or exits 1537 a focused connection or another type of electronic communication over a network (herein collectively named a “communication” 1537), such as in some examples Participant 1 speaks English 1538, in some examples Participant 2 speaks English 1539, in some examples Participant 3 speaks Spanish 1542, and in some examples Participant 4 speaks French 1543; while in some examples each additional Participant N may speak another and different language 1544. In some examples as each Participant 1 through N 1538 1539 1542 1543 1544 enters said communication 1537 a language recognition process 1541 automatically determines at least one of each new Participant's fluently spoken language(s). In some examples as each Participant 1 through N 1538 1539 1542 1543 1544 enters said communication 1537 a language recognition process 1541 does not determine a new Participant's language but instead waits for a manual indication of a Participant's language by means of a user interface or command, in order to determine which language translation is needed by each Participant. Said language translation user interface may also receive and employ other known translation instructions or commands such as in some examples source language(s), target language(s), transcription (as described below), e-mail transcription, archive transcription, archive recorded communication, a repeat and clarify option, a repeat and re-translate option, a translate file or attachment option, and/or other language translation options.

In some examples of an automated language recognition process 1541, as each Participant speaks, voice sampling is performed by known means to determine at least one of each Participant's fluently spoken language(s) 1541, and said language data may be used both for input language recognition and/or for output language generation. In some examples of an automated language recognition process 1541, each Participant's identity is known (such as in some examples if they are members of an SPLS, in some examples if they are employees of a Corporation and logged into a corporate network, and in some examples by other identification means); in such a case the language recognition process 1541 may (optionally) determine the identity of a new Participant 1538 1539 1542 1543 1544, retrieve said identity's directory entry, user profile data or other identity data; and in some examples utilize a “native language” attribute in said Participant's retrievable data to determine at least one of each Participant's fluently spoken language(s) 1541. In some examples of an automated language recognition process 1541, each Participant's identity is known (as described elsewhere) but one or a plurality of Participants does not have a retrievable “native language” data attribute; in such a case the language recognition process 1541 may (optionally) determine a likely fluent spoken language for said new known Participant by utilizing retrievable identity data such as in some examples a current home address, in some examples a current business or work address, in some examples a current telephone country code and/or area code, in some examples GPS data such as provided by a cellular telephone, in some examples GPS data such as provided by another type of device, and in some examples other retrievable location indicating data to determine at least one of each Participant's fluently spoken language(s) 1541 in that geographic region.
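
The layered determination described above (voice sampling, then a “native language” profile attribute, then location-indicating data) can be pictured as a fallback chain; the sketch below uses hypothetical stubs for each known means:

```python
# Hypothetical fallback chain for automated language recognition (1541).
def sample_voice(audio: bytes):
    """Stub: known voice-sampling means; returns a language code or None."""
    return None   # e.g., the sample was inconclusive

def profile_language(identity: dict):
    """Stub: a 'native language' attribute from retrievable identity data."""
    return identity.get("native_language")

def location_language(identity: dict):
    """Stub: infer a likely fluent language from address, country code or GPS."""
    region_map = {"+33": "fr", "+34": "es", "+1": "en"}   # illustrative only
    return region_map.get(identity.get("phone_country_code"))

def recognize_language(audio: bytes, identity: dict):
    for attempt in (lambda: sample_voice(audio),
                    lambda: profile_language(identity),
                    lambda: location_language(identity)):
        language = attempt()
        if language:
            return language
    return None   # fall back to manual indication via the user interface

print(recognize_language(b"", {"phone_country_code": "+34"}))   # -> "es"
```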

In some examples of an automated language recognition process, as Participant 1 1538 and Participant 2 1539 communicate directly, an automated language recognition process 1541 would recognize that Participant 1 speaks English 1538 and Participant 2 also speaks English 1539, in which case all the Participants speak the same language and said language recognition process 1541 would not initiate language translation 1540; in addition, said automated language recognition process 1541 would not perform another language recognition 1541 until a Participant enters 1537 or exits 1537 said communication 1538 1539.

In some examples of an automated language recognition process, as Participant 1 1538 and Participant 2 1539 communicate directly, Spanish-speaking Participant 3 1542 is present from the beginning of a communication 1538 1539 1542, and in some examples Spanish-speaking Participant 3 1542 joins a single language (English) communication after it has begun; in either case an automated language recognition process 1541 recognizes that Participant 1 speaks English 1538 and Participant 2 also speaks English 1539 but Participant 3 1542 speaks Spanish; in which case said automated language recognition process 1541 would map the input and output language(s) of each Participant and initiate language translation 1540; as a result, Participant 3's 1542 spoken and/or written communications would be translated into English by a translation subsystem 1540 before being received by Participant 1 1538 and Participant 2 1539; in parallel, it would initiate language translation 1540 such that Participant 1's 1538 and Participant 2's 1539 spoken and/or written communications would be translated into Spanish by a translation subsystem 1540 before being received by Participant 3 1542; in addition, said automated language recognition process 1541 would not perform another language recognition 1541 until a Participant enters 1537 or exits 1537 said communication 1538 1539 1542.

In some examples of an automated language recognition process, as Participant 1 1538 and Participant 2 1539 and Participant 3 1542 communicate directly, French-speaking Participant 4 1543 is present from the beginning of a communication 1538 1539 1542 1543, and in some examples French-speaking Participant 4 1543 joins a two-language (English and Spanish) three-party communication after it has begun; in either case an automated language recognition process 1541 recognizes that English is spoken by Participants 1 1538 and 2 1539, Spanish is spoken by Participant 3 1542, and French is spoken by Participant 4 1543; in which case said language recognition process 1541 would initiate language translation 1540 such that Participant 3's 1542 spoken Spanish communications and/or written Spanish communications would be translated into English for Participants 1 1538 and 2 1539, and into French for Participant 4 1543, by a translation subsystem 1540 before being received by Participants 1 1538 and 2 1539 and 4 1543; in parallel, it would initiate language translation 1540 such that Participant 4's 1543 spoken French communications and/or written French communications would be translated into English for Participants 1 1538 and 2 1539, and into Spanish for Participant 3 1542, by a translation subsystem 1540 before being received by Participants 1 1538 and 2 1539 and 3 1542; in parallel, it would initiate language translation 1540 such that Participants 1's 1538 and 2's 1539 spoken English communications and/or written English communications would be translated into Spanish for Participant 3 1542, and into French for Participant 4 1543, by a translation subsystem 1540 before being received by Participant 3 1542 and by Participant 4 1543; in addition, said automated language recognition process 1541 would not perform another language recognition 1541 until a Participant enters 1537 or exits 1537 said communication 1538 1539 1542 1543.

In another example, an automated language recognition process 1541 would adjust the translation mapping 1540 as Participants 1 through N 1538 1539 1542 1543 1544 enter 1537 or exit 1537 a communication in order to provide parallel and simultaneous translation(s) 1544 for each of the Participants in said communication. In some examples entering 1537 a communication may mean providing an appropriate translation indication as described elsewhere. In some examples exiting 1537 may mean leaving a communication 1537 1541, and in some examples exiting 1537 may mean temporarily suspending a communication (including in some examples exiting a room, in some examples putting a portable communication device away, in some examples logging out as an identity, in some examples a manual suspend command, in some examples other actions that indicate that a device is no longer in use such as by that device entering a suspended state, or in some examples other temporary suspend use indicators as described elsewhere).
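
The mapping behavior in the preceding examples (English/English, then adding Spanish, then French) reduces to a routing rule: translate each utterance once per distinct target language and deliver it to every other participant in that participant's language. A sketch, with all names hypothetical:

```python
# Hypothetical translation router that re-maps as participants enter/exit (1537/1541).
def translate(text: str, src: str, dst: str) -> str:
    """Stub for any known translation means (1540)."""
    return f"<{src}->{dst}> {text}"

class TranslationRouter:
    def __init__(self):
        self.participants = {}   # name -> fluent language

    def enter(self, name, language):
        self.participants[name] = language    # entering triggers a re-mapping

    def exit(self, name):
        self.participants.pop(name, None)     # exiting also triggers a re-mapping

    def route(self, speaker, utterance):
        src = self.participants[speaker]
        targets = {lang for who, lang in self.participants.items() if who != speaker}
        for lang in targets:                  # one translation per target language
            text = utterance if lang == src else translate(utterance, src, lang)
            for who, who_lang in self.participants.items():
                if who != speaker and who_lang == lang:
                    print(f"{who} hears [{lang}]: {text}")

r = TranslationRouter()
r.enter("Participant 1", "en"); r.enter("Participant 2", "en")
r.enter("Participant 3", "es"); r.enter("Participant 4", "fr")
r.route("Participant 3", "Hola a todos")   # translated in parallel into en and fr
```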

In some examples known means are used to store, retrieve and process the respective language designation of each of the Participants in a communication; in some examples known means are used to transmit to each calling device in a communication one or a plurality of Participants' language designation(s) such that said designation(s) may be stored, retrieved and used to process the respective translation(s) required to receive each Participant's spoken and/or text communications; in some examples known means are used to transmit to each calling device in a communication one or a plurality of Participants' language designation(s) such that said designation(s) may be stored, retrieved and used to process the respective translation(s) required to transmit that Participant's spoken and/or text communications. In some examples known means are used to transmit to each calling device in a communication one or a plurality of Participants' language designation(s) such that each device may provide appropriate and separate language processing when various components are distributed to the respective devices (such as spoken translation and/or text translation).

In some examples known means are used to transmit to each calling device in a communication one or a plurality of Participants' language designation(s) such that said designation(s) may be manually modified or controlled by each Participant in a communication. In some examples a calling device(s) and a called device(s) are in one or a plurality of different communication systems and known means are used to transmit the one or a plurality of Participants' language designation(s) according to the call signaling of each respective communication system.

In some examples of networked communications a translation function 1540 is dynamically inserted in a communication for translating spoken and/or text communications that are directed to a Participant into a language in which that Participant is fluent. In some examples communications are direct between devices but by means of a language recognition function in one or a plurality of said communicating devices 1541, a translation service(s) 1540 may be automatically or manually inserted in said direct communications (as described elsewhere). In some examples each Participant's device 1538 1539 1542 1543 1544, each language recognition component 1541, and each translator 1540 (whether a translation subsystem, a translation service, a translation module, a translation application, or another known translation means) may use the same local or distributed set of language translation components, or alternatively may use a different set of local or distributed language translation components, in order to effect real-time translation or near real-time translation; with the distribution of various functional components not limiting the implementation of language recognition 1541 and/or language translation 1540.

In some examples a plurality of language translations 1540 are performed in parallel so that a plurality of Participants in a communication, who are each fluent in a different language, may simultaneously receive each spoken and/or text communication in their respective and different languages; which may be effected in some examples by parallel processing, in some examples by multiple sound cards, in some examples by multiple processors, in some examples by software controlled switching techniques, in some examples by multiple translation subsystems, in some examples by multiple translation services, and in some examples by other known means. In some examples spoken translation includes any form of speech, conversation, verbal presentation, voicemail, voice messages, voice commands, one or a plurality of data packets that encapsulate a voice signal, or other types of verbal communications. In some examples text translation includes any form of non-spoken content such as IM (Instant Messaging), chat, e-mail messages, fax (facsimile), SMS, an electronic file (such as an e-mail attachment), an electronic language file (such as for sign language or Braille), or other types of text-based messages and/or non-spoken content. In some examples a translated language(s) includes one or a plurality of Participants utilizing a dialect such as in some examples a non-standard variety of a language that is used by one ethnic or regional group of a language's speakers, in some examples a non-standard variety of a language that is used by a social class within a society, in some examples the heavy use of non-standard words such as slang, or in some examples another type of non-standard variety of a language. In some examples a translated language(s) includes one or a plurality of Participants utilizing a non-spoken language such as in some examples encoded sign language, in some examples Braille, and in some examples another type of non-spoken language.

In some examples said language translation 1540 is performed by known means: In some examples language translation 1540 includes separate translators such as in some examples at least one translator for spoken language, in some examples at least one translator for text language, in some examples at least one translator for dialects, and in some examples at least one translator for non-spoken languages. In some examples language translation 1540 produces translated output in a second language that is derived from speech input in a first language, by means of converting said speech input signal into a digital format with a voice model that includes a content component and a speech pattern component, whereby the content component is translated from the first language into the second language, and an audible output signal is generated that comprises the translated content with an approximation of the speech input signal's speech pattern. In some examples language translation 1540 comprises distributed components that include a real-time translator that has a microphone (or another voice receiver) at a calling device, a converter that converts voice to text, a text-to-text translator that receives the input of a first language and translates it to a selected second language, a converter that converts text to voice for producing audible output of said translation in a second language, and a speaker (or another voice emitter) for playing the voice output at a called device, with said conversion components and translation components distributed so as to effect the translation process. In some examples language translation 1540 may be resident at one or a plurality of host computers, or at one or a plurality of networked data centers, where each language input from a Participant is speech that is processed by speech recognition, translated into one or a plurality of output languages, and said translation is processed by speech generation before each appropriate translated and generated second language speech is transmitted to each appropriate second language-speaking Participant, where it is played or recited by the called device. In some examples language translation 1540 may include components such as speech conversion, language conversion, language translation, transcription, speech generation, language generation, a language translation user interface, and/or one or a plurality of language databases. In some examples language translation 1540 includes speech recognition based on a combination of attributes such as semantics and syntax to map received speech to machine-readable intermediate data that indicates words and/or sounds in a first language (such as English) from a first Participant, whereby said indicated words may be translated into a second language (such as Spanish) for a second Participant that correspond to the sounds and words in the first language, and then generates a translated audio voice signal in the second language that is audibly played for the second Participant in real time (or in near real time).
In some examples language translation 1540 receives live speech, converts the speech to text, translates the text into one or a plurality of different languages of the Participants, and then in some examples transfers a translated text to each second language Participant in that Participant's language, or in some examples utilizes said translated text to generate and transmit synthesized speech in each second language Participant's fluent language; in such a case either or both of text and generated speech may be provided. In some examples language translation 1540 includes recognizing phrases and sentences (rather than only words) in a naturally spoken first language to determine some expressions and/or meanings that are used to determine recognition hypotheses from general language models of the source language; when source expressions are determined they may be translated into similar expressions in a second language so that the speaker's intended meaning(s) may be more accurately provided in the second language translation. In some examples language translation 1540 receives a speech signal in a first language from a first Participant, converts it to text, translates that text into a second language, and displays that translated second language as closed captioned text overlaid on the visual image of the first Participant speaking the untranslated speech. In some examples language translation 1540 may use any translation software, system, method, process, service or other known means to effect the required translation(s).
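
Several of the variants above share one pipeline shape: recognize speech to text in the first language, translate the text, then either deliver the text (e.g., as closed captions) or synthesize second-language speech. A sketch with stub stages, none of which name real libraries:

```python
# Hypothetical speech -> text -> translation -> speech pipeline (1540).
def speech_to_text(audio: bytes, language: str) -> str:
    """Stub: speech recognition in the source language."""
    return "hello everyone"

def translate_text(text: str, src: str, dst: str) -> str:
    """Stub: text-to-text translation between the mapped languages."""
    return f"<{src}->{dst}> {text}"

def text_to_speech(text: str, language: str) -> bytes:
    """Stub: speech generation, optionally shaped by the speaker's voice profile."""
    return f"[audio:{language}] {text}".encode()

def translate_speech(audio: bytes, src: str, dst: str, want_text: bool = False):
    text = speech_to_text(audio, src)
    translated = translate_text(text, src, dst)
    out_audio = text_to_speech(translated, dst)
    return (out_audio, translated if want_text else None)

audio_out, caption = translate_speech(b"...", "en", "es", want_text=True)
print(caption)   # text output may be shown as closed captions or kept as a transcript
```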

In some examples speech synthesis may correspond to and reflect the vocal and audio characteristics of the respective Participants in a communication. In some examples language translation 1540 may include a known profile of one or a plurality of Participants so that speech synthesis may automatically select an audible voice that reflects each speaking Participant's gender, age, weight, etc. In some examples language translation 1540 may include voice analysis of one or a plurality of Participants so that speech synthesis may automatically select an audible voice that corresponds to the speaker's voice tone and quality, so that said voice selection approximates as closely as possible the sound of the voice of each Participant.

In some examples speech synthesis may correspond to and reflect the visual characteristics of the respective Participants in a communication. In some examples language translation 1540 displays and speaks a completed translation in a second language by means of a visual animated display such as an animated character image (that in some examples corresponds to a speaking Participant's age, sex and/or weight), wherein the animated character's mouth moves appropriately when speaking the words and sounds in the second language's translation; in addition, in some examples other facial features may also be animated to display facial characteristics that relate to the speaker's speech pattern such as inflection or tone. In some examples such an animation may accurately reflect at least some of the real first Participant's real facial appearance, real mouth movements, and/or other real facial expressions, whereby some of their movements may be correlated, when speaking the translation, to the inflections that the first Participant used to say specific words or phrases while speaking the source statement in the first language (in other words, a dynamic near real-time animation may include a likeness or appearance of the first speaker).

In some examples language translation 1540 may include a transcription component that produces a saved transcript in one or a plurality of languages, with said saved transcripts archived such that each transcript in each language is searchable, retrievable in whole or in part, downloadable, automatically e-mailed, or otherwise accessible to one or a plurality of Participants, or to others who may be interested in a particular communication; and in addition, said transcription component may be configurable by a user interface or by commands to display the communication's transcript on one or more networked devices while a communication occurs. In some examples said transcription component of language translation 1540 may be available when translation is not utilized such as during a communication that is only between English speaking Participants 1538 1539, and said transcription component may be utilized to produce a saved transcript in the Participants' language, with said saved transcript archived such that it is searchable, retrievable in whole or in part, downloadable, automatically e-mailed, or otherwise accessible to one or a plurality of Participants, or to others who may be interested in a particular communication; and in addition, said transcription component may be configurable by a user interface or by commands to display the communication's transcript on one or more networked devices while a communication occurs.

FIG. 52, “Speech Recognition Interactions” illustrates speech recognition, which is one of a plurality of ARTPM user I/O capabilities (as described elsewhere), that in some examples converts spoken words to text, in some examples converts spoken words to device instructions or commands, in some examples provides text input, and in some examples includes two-way interactions with a device that employs speech synthesis to produce responses. In some examples an LTP, MTP, RTP, AID/AOD that is running a VTP, a TP subsidiary device run by RCTP, networked systems, or another type of electronic device may include speech recognition. In some examples a device has a microphone, an audio speaker and a speech recognition and speech synthesis system, and in some examples a device has a microphone, an audio speaker and networked communications that can transmit voice data for networked speech recognition and speech synthesis processing. In some examples users start speech recognition by a verbal indication, in some examples by a physical indication means, in some examples by a software indication means, and in some examples by another type of indication. In some examples speech services processing is performed by a speech recognition system in the local device, and in some examples speech services processing is performed by networked speech recognition processing with two-way communications. In some examples a spoken instruction is matched with a speech recognition vocabulary, which in some examples is contextual and appropriate to the type of operation a user is performing with a device. In some examples speech recognition is performed by one or a plurality of known speech recognition means, methods, processes, or systems. In some examples speech recognition fails; in some examples a speech recognition engine may attempt to determine the cause of the failure and provide audio, visual and/or other means to correct it. In some examples a visual and/or audio indication is provided by one or a plurality of means that speech recognition succeeded. In some examples after speech recognition succeeds a recognized instruction(s) is matched with the corresponding device command(s) which are utilized to perform the instruction(s) and show the result. In effect, device performance is directed by spoken interactions with any needed corrective actions, indications of success and the results produced.

FIG. 53, “Speech Recognition Processing,” illustrates some examples where speech recognition processing 1582 1583 is performed as described above, including corrective actions if it fails. In some examples after speech recognition of a user's instruction(s) succeeds the recognized instruction(s) is matched with the appropriate device command(s), which perform the task or instruction. In some examples the result of the user's verbal instruction is confirmed verbally, visually or by other means such that the effect of the user's spoken direction(s) is clearly indicated so the user knows the device has performed the proper and correct action(s). In some examples a user may choose to use speech entry of dictated text to perform text entry, such as to verbally enter words and numbers in fields, to enter text in a memo or e-mail, or to enter text for another purpose. In some examples the result of spoken text entry is indicated clearly such as by displayed text, by synthesized speech, or by other means so the user knows the device has performed the proper and correct action(s). In some examples different speech recognition processing may provide different types of speech recognition, such as local device speech recognition that matches user instructions against a locally stored controlled vocabulary, while networked speech services provide text entry recognition by means of a large vocabulary whose breadth may include an entire language or multiple languages.

FIG. 54, “Speech Recognition Optimizations,” illustrates some examples of optimizations (which are described elsewhere in more detail) including both automated optimization means and manual optimization means. In some examples speech interactions may be optimized by collecting and recording failed attempts; by categorizing failures into groups (such as by content analysis or other means); and by ranking categories of failures such as by each category's rate of failure. In some examples optimization proceeds by identifying failures and subsequent successes, collecting and recording said successes, and associating successes with categories of failures to create parallel categories of recorded successes, then ranking successes by each category's rate of success. In some examples specific types of successes may be tested by automated means and/or by manual means to determine which produce a higher rate of user success, and to adapt the speech recognition system to employ those and produce a higher rate of user success.
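
As a sketch of that optimization loop under illustrative assumptions (the category labels and counting scheme below are invented for the example, not specified by the text):

```python
# Hypothetical failure/success bookkeeping for speech recognition optimization.
from collections import defaultdict

class RecognitionOptimizer:
    def __init__(self):
        self.failures = defaultdict(int)    # category -> failure count
        self.successes = defaultdict(int)   # category -> subsequent-success count

    def record_failure(self, category):
        self.failures[category] += 1

    def record_success_after(self, category):
        self.successes[category] += 1       # a success following that failure type

    def ranked(self):
        """Rank failure categories by the rate at which users later succeeded."""
        def rate(cat):
            return self.successes[cat] / self.failures[cat] if self.failures[cat] else 0.0
        return sorted(self.failures, key=rate, reverse=True)

opt = RecognitionOptimizer()
opt.record_failure("accent_mismatch"); opt.record_success_after("accent_mismatch")
opt.record_failure("out_of_vocabulary")
print(opt.ranked())   # candidates for automated and/or manual testing
```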

Speech recognition provides benefits such as in some examples enabling hands-free device control and device interactions while engaged in other activities; in some examples a simplified and consistent command vocabulary that can be distributed to multiple devices for ease-of-use when utilizing a new device; in some examples the ability for some devices to respond such as in some examples by validating a command before executing it, in some examples to use voice interaction to obtain supplementary data or correct insufficient data, in some examples to display or verbalize an expanded task-specific vocabulary of local commands when a user performs a specific type of task, and in some examples perform other types of verbal operations that expand ease-of-use, accessible functions, etc.

Turning now to FIG. 52, “Speech Recognition Interactions,” some examples are illustrated in which there is automated speech recognition and automated speech synthesis that in some examples provide at least some verbal control of a device, in some examples provide text input where text is utilized, and in some examples provide other uses (collectively referred to herein as “speech recognition”). In some examples an LTP 1551 may include speech recognition 1558; in some examples an MTP 1551 may include speech recognition 1558; in some examples an RTP 1552 may include speech recognition 1558; in some examples an AID/AOD that is running a VTP 1553 may include speech recognition 1558; in some examples a TP subsidiary device 1554 (as described elsewhere) that is running RCTP may include speech recognition 1558; in some examples one or a plurality of networked systems 1556 (such as in some examples a server 1556, in some examples an application 1556, in some examples a database 1556, in some examples a service 1556, in some examples a module within an application that utilizes an API to access a server or service 1556, or in some examples another network means 1556) may include speech recognition 1558; in some examples another type of electronic device such as in some examples an AKM device 1554 (as described elsewhere) may include speech recognition 1558; in some examples another type of networked electronic device 1554 may include speech recognition 1558, or in some examples speech recognition may be provided for a networked electronic device 1554 (such as in some examples an AKM device 1554) by a network subsystem 1556, a network service 1556, or by other remote means over a network 1556 such as an application, a speech recognition server, etc.

In some examples speech recognition 1558 may take the form of an entirely hardware embodiment that is located in one or a plurality of locations and provided by one or a plurality of vendors, in some examples an entirely software embodiment that is located in one or a plurality of locations and provided by one or a plurality of vendors, or in some examples a combination of hardware and software that is located in one or a plurality of locations and provided by one or a plurality of vendors. In some examples speech recognition 1558 may take the form of a computer program product (e.g., an unmodifiable or customizable computer software product) on a computer-readable storage medium; and in some examples speech recognition may take the form of a web-implemented software product, module, component, and/or service (including a Web service accessible by means of an API for utilization by other applications and/or services, such as in some examples communication services). In some examples said devices, hardware, software, systems, services, applications, etc. 1558 are connected by one or a plurality of disparate networks 1550; in some examples parts of said speech recognition 1558 may be distributed such that various functions are located in local and/or remote devices, storage, and media so that various steps are performed separately and link through said network(s) 1550; in some examples the equivalent of said speech recognition 1558 may be provided by means other than exemplified herein and provided over said network(s) 1550.

In some examples speech recognition 1558 begins when a speaker interacts verbally with a device that has a microphone, an audio speaker and a speech recognition system 1559; and in some examples speech recognition 1558 begins when a speaker interacts verbally with a device that has a microphone, an audio speaker and networked communications that can transmit voice data 1559 1562 for networked speech recognition processing. In some examples to start speech recognition a user speaks an appropriate command word that initiates speech recognition followed by a task instruction, such as in some examples “(device name) (command) (object)” such as “Teleportal focus the connection with Jane,” which in some examples instructs a device (a Teleportal) to perform an action (from a currently open SPLS, focus the current live connection with the SPLS member named Jane). In some examples a command word is not needed and instead one or a plurality of speech recognition indications are provided such as in some examples by using a pointing device to highlight an indicator such as a speech recognition icon, in some examples by a gesture, in some examples by a predefined type of touch on a screen or icon or button, in some examples by a predefined button or touch on a remote control, in some examples by a predefined physical indicator such as by means of a user I/O device, in some examples by means of a predefined software indicator such as a user interface element, and in some examples by another indication means.
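
The “(device name) (command) (object)” pattern lends itself to a simple grammar; the parser below is a hypothetical illustration (the verb list is an assumption, not a vocabulary the disclosure defines):

```python
# Hypothetical parser for "(device name) (command) (object)" instructions.
import re

COMMAND_VERBS = ("focus", "start", "end", "increase", "change", "display")

def parse_instruction(utterance: str):
    pattern = (r"^(?P<device>\w+)\s+"
               rf"(?P<command>{'|'.join(COMMAND_VERBS)})\s+"
               r"(?P<object>.+)$")
    m = re.match(pattern, utterance.strip(), re.IGNORECASE)
    if m is None:
        return None   # failure path: prompt or clarifying question, as described
    return m.group("device"), m.group("command").lower(), m.group("object")

print(parse_instruction("Teleportal focus the connection with Jane"))
# -> ('Teleportal', 'focus', 'the connection with Jane')
```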

In some examples speech services processing 1563 1564 1565 is performed by a speech recognition system in the local device 1560; and in some examples speech services processing 1563 1564 1565 is performed by networked speech recognition processing with two-way voice communications 1561 1562. In some examples a spoken command word and instruction are matched with a speech recognition vocabulary which in some examples is stored in a local device 1560 1563, in some examples is stored by networked speech recognition processing 1561 1562 1563, and in some examples is stored by a combination of a local device 1560 1563 (for a shorter response time) and networked speech recognition processing 1561 1562 1563 (for a broader range of speech recognition capabilities, algorithms and vocabularies).

In some examples to increase recognition accuracy and speed, speech services processing 1563 is contextual 1564 such as when a user utilizes a device to perform different types of operations. In some examples based on a setting or use of an element in the user interface, the selection of an operation causes the display of a set of contextually appropriate commands 1564 and instructions 1564 in a proximate location to the portion of a display where that selected operation is located; and in some examples said list of contextually appropriate commands 1564 and instructions 1564 dynamically adapts to the user's words while issuing a command so that both valid and likely speech recognition instruction options are presented at all times. In one illustration of one type of operation such as a focused communication 1564, certain commands are more likely and may be displayed for verbal use and more accurate recognition 1565 such as in some examples “Teleportal increase volume,” “Teleportal change background to [say location, like ‘the Lincoln Memorial in Washington, D.C.’],” “Teleportal start recording,” “Teleportal end focused connection,” etc. In a second illustration of a second type of operation such as constructing a digital reality 1564, different commands are more likely and may be dynamically adapted to the current stage of a task for greater relevance and recognition 1565 such as in some examples “Teleportal display RTP views of Times Square,” “Teleportal select aerial view 4,” “Teleportal change all advertising displays [name a product such as Coca-Cola or a person such as your sister],” “Teleportal broadcast this digital reality with the name ‘It's Jane's day in Times Square’,” etc. In a third illustration of a third type of operation such as editing a boundary Paywall 1564, different commands are more likely and may be dynamically displayed based upon previous types of Paywall edits which that user has performed for greater personalization and recognition 1565 such as in some examples “Teleportal list brands blocked from this identity's digital realities,” “Teleportal add Kellogg's to the list of blocked brands,” “Teleportal respond to Kellogg's ads and product images with my usual Paywall payment offer,” etc. In a fourth illustration a Context Free Grammar (herein CFG) may be employed to limit the vocabulary and syntax to a narrow set that fits numerous application states such as start, stop, focus, end focus, record, stop recording, add background, change background, remove background, etc.
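
One way to picture the contextual narrowing described above is a per-operation grammar that both drives the displayed command list and constrains matching; the command sets below are condensed from the illustrations in this paragraph and are not an exhaustive vocabulary:

```python
# Hypothetical contextual vocabularies (1564) consulted before matching (1565).
CONTEXT_GRAMMARS = {
    "focused_communication": ["increase volume", "change background to *",
                              "start recording", "end focused connection"],
    "construct_digital_reality": ["display RTP views of *", "select aerial view *",
                                  "change all advertising displays *",
                                  "broadcast this digital reality with the name *"],
    "edit_paywall": ["list brands blocked from this identity's digital realities",
                     "add * to the list of blocked brands"],
}

def candidate_commands(context: str, spoken_prefix: str) -> list:
    """Return commands valid in this context that fit what was said so far."""
    prefix = spoken_prefix.lower()
    stems = [(c, c.split(" *")[0]) for c in CONTEXT_GRAMMARS.get(context, [])]
    return [c for c, stem in stems
            if stem.startswith(prefix) or prefix.startswith(stem)]

print(candidate_commands("focused_communication", "change background"))
# -> ['change background to *']
```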

In some examples after each command and instruction speech recognition is performed 1565 by matching the instruction against that context 1564 and that context's vocabulary 1564; in some examples by matching each instruction against a controlled vocabulary 1565 (including “fuzzy” matching in some examples); in some examples by transforming digital audio into an acoustic representation, extracting phonemes, applying a “grammar” to determine which phonemes were spoken, and to convert phonemes into words 1565; in some examples by using a hidden Markov model 1565; in some examples by permitting continuous dictation in certain instances such as to transcribe text input into a field or a text zone 1565; in some examples by permitting the recognition of continuous speech under any and all conditions 1565; and in some examples by utilizing another process by which a device and/or local or remote system utilize speech as a means of issuing commands, entering data input, or converting speech to text 1565.

In some examples a visual or audio indication is provided that recognition succeeded 1566 which in some examples may be by performing the instruction 1569, visibly showing the result 1569 and awaiting the next instruction 1569; in some examples an indication may be showing a success icon or image known to the local culture such as a green check mark 1569; in some examples an indication may be synthesizing and “voicing” a verbal reply such as “Done. Say undo, or what to do next” 1569; in some examples by highlighting the instruction that was just performed such as a background that was replaced 1569; in some examples by another type of indication 1569; and in some examples by a combination of two or more types of indications 1569 such as in some examples showing the result, highlighting it and displaying a green check mark next to it 1569.

In some examples speech recognition fails 1566 such as in some examples because the speaker's word(s), language or accent were not understood 1566; in some examples a controlled vocabulary did not include the speaker's words 1566; or for another reason that an instance of speech recognition might fail 1566. At the occurrence of a failure 1566 this speech recognition engine attempts to determine the cause of the failure 1567 and in some examples select a clarifying request 1567 or question 1567; in some examples generate a clarifying question or request 1567; in some examples select a short list of the most likely valid instructions 1567; or in some examples utilize a different type of prompt or corrective action. Said request 1567 or question 1567 is synthesized as speech 1568 and transmitted as a response to be played by the audio speaker(s) of the user's device 1559, so that the user may attempt to respond appropriately 1559 and speech services processing 1563 may re-attempt speech recognition 1565 of said user's reply. Alternatively, the list of the speech engine's best guess of valid instructions 1567 may be transmitted 1568 and displayed 1559 for the user to select and say one of the instructions 1559, or for the user to construct a different instruction that resembles the examples displayed 1559, and speech services processing 1563 may re-attempt speech recognition 1565 of said user's reply. Alternatively, in some examples optimizations 1570 may (optionally) be performed as described in FIG. 54.
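
The failure path just described is essentially a bounded retry loop around recognition, with a synthesized clarifying prompt or a short list of best-guess valid instructions between attempts; a minimal sketch with stub functions:

```python
# Hypothetical recovery loop around speech recognition (1566-1569).
def recognize(utterance: str, vocabulary: set):
    """Stub for speech services processing (1563/1565): exact-match only."""
    return utterance if utterance in vocabulary else None

def clarify(vocabulary: set):
    """Stub: choose a clarifying question plus likely valid instructions (1567)."""
    return "Sorry, please repeat.", sorted(vocabulary)[:3]

def interaction_loop(utterances: list, vocabulary: set, max_attempts: int = 3):
    for spoken in utterances[:max_attempts]:
        result = recognize(spoken, vocabulary)
        if result:
            print(f"performing: {result}")   # matched to a device command (1569)
            return result
        question, guesses = clarify(vocabulary)                  # 1567
        print(f"synthesized prompt: {question} Try: {guesses}")  # 1568

vocab = {"start recording", "stop recording", "end focused connection"}
interaction_loop(["start recordin", "start recording"], vocab)
```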

In some examples after speech recognition succeeds 1565 1566 the recognized instruction(s) 1566 is matched with the appropriate device command(s) 1569, which are utilized to perform the instruction 1569 and show the result 1569. In effect, device performance 1569 is directed by spoken interactions 1559 with repeated indications of success 1569 and the results produced 1569 when speech succeeds, and recovery actions 1567 1568 when it fails. In addition, in some examples clear and visible guidance such as contextually valid and appropriate instructions may be displayed as a default setting or as a recovery action at any time guidance is desired or helpful. In some examples visible, appropriate and sequenced speech instructions guidance may be set to display whenever a user starts an unfamiliar task such as in some examples constructing a new digital reality, in some examples setting one or a plurality of boundaries that control what is included and what is excluded from an identity's digital realities, in some examples copying an entire set of personal boundaries that have been proven to produce high revenues for their users, or in some examples starting another type of unfamiliar task. In some examples these sequenced speech instructions may be downloaded to a device as needed from an AKM (as described elsewhere) when a user starts an unfamiliar task. Therefore, in some examples a device such as a Teleportal may offer a wide range of capabilities to a novice user, but simultaneously provide means to enable potential performance success when attempting a new task for the first time.

Turning now to FIG. 53, “Speech Recognition Processing,” some examples are illustrated of processing speech recognition 1580. In some examples as described elsewhere one or a plurality of TP devices 1576 may include speech recognition such as in some examples an LTP 1576, in some examples an MTP 1576, in some examples an RTP 1576, in some examples an AID/AOD that is running a VTP 1576, in some examples a TP subsidiary device 1576 (as described elsewhere) that is running RCTP, in some examples one or a plurality of networked systems 1576 1577 (as described elsewhere), and in some examples another type of electronic device such as in some examples an AKM device 1576 (as described elsewhere) or in some examples another type of networked electronic device 1576; and in some examples speech recognition may be provided to such a device 1576 by a network subsystem 1576 1577, a network service 1576 1577, or by other remote means over a network 1576 1577 such as an application, a speech recognition server, etc.

In some examples speech recognition processing 1581 begins when a speaker interacts verbally with a device that has a microphone, an audio speaker and a speech recognition system 1581; and in some examples speech recognition 1581 begins when a speaker interacts verbally with a device that has a microphone, an audio speaker and networked communications that can transmit voice data (as described elsewhere) for remotely located, networked speech processing.

In some examples speech services processing 1582 1583 is performed as described elsewhere (such as in some examples by a speech recognition system in the local device 1582; in some examples by networked speech recognition processing 1582 with two-way voice communications; in some examples by a spoken command word and instruction that are matched with a speech recognition vocabulary 1583; in some examples speech services processing 1583 is contextual; and in some examples speech services processing 1583 is performed by another speech recognition means as described elsewhere). In some examples speech recognition fails 1584 (as described elsewhere) and in some examples at the occurrence of said failure the speech recognition engine attempts to determine the cause of the failure and obtain clarification 1584 1585 1581 (such as in some examples by means of voice synthesis 1585 1581, and in some examples by other types of prompts 1585 1581) so a user may attempt to respond appropriately 1581 and speech services processing 1583 may re-attempt recognition 1565 of said user's new reply. Alternatively, in some examples optimizations 1594 may (optionally) be performed as described in FIG. 54.

In some examples after speech recognition of a user's instruction(s) succeeds 1583 1584 the recognized instruction(s) is matched with the appropriate device command(s) 1586 1587 which are transmitted to the device (such as locally 1587 in some examples between a device's speech engine component and device processing, such as remotely 1587 in some examples between networked speech services and device processing, and such as a combination 1587 in some examples between networked speech services that provide speech recognition and device processing that matches the remotely recognized instruction[s] with the corresponding device command[s]); and are utilized to perform the user-directed task or instruction 1588. In some examples the result 1589 1590 1581 of the user's verbal instruction is displayed clearly 1589 1590 1581, in some examples the actions are confirmed verbally by synthesized speech 1590 1581, and in some examples the result 1589 1590 1581 is indicated by one or a plurality of other means (as described elsewhere) such that the effect of the user's spoken direction(s) are clearly indicated so the user knows the device has performed the proper and correct action(s) 1590 1581.

In some examples a user may choose to use speech entry of text 1581 when performing contextually appropriate text entry during a task such as in some examples to verbally enter words and numbers in a field 1581, in some examples to verbally enter a text message in a form 1581, in some examples to verbally enter text in a memo 1581 or an e-mail 1581, and in some examples to verbally enter text for another purpose 1581. In some examples speech recognition of text proceeds in the same manner 1581 1582 1583 1584 1585 with any remote networked speech recognition transmitted 1592, and local speech recognition displayed locally, until the text is produced successfully 1586 1592 and entered into the appropriate text entry field or zone 1593 where it is visible 1594. In some examples the result 1593 1594 of the user's verbal text dictation is displayed clearly 1593 1594 1581, in some examples the actions are confirmed verbally by synthesized speech 1595 1581, and in some examples the result 1593 1594 1595 1581 is indicated by one or a plurality of other means (as described elsewhere) such that the effect of the user's verbal entry of text is clearly indicated so the user knows the device has performed the proper and correct action(s) 1594 1595 1581.

In some examples different speech services 1582 1583 may be employed to provide different types of speech recognition such as in some examples local device speech services 1582 may match user instructions against a controlled vocabulary that is locally stored 1583, and in some examples networked speech services 1582 may provide an alternate speech recognition processing for text entry in which a user's verbal entries are matched against a large vocabulary 1583 whose breadth of speech recognition capabilities may scale to both an entire language and to multiple languages, serving one or a plurality of users 1581 in one or a plurality of locations.
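A short sketch of this two-tier arrangement, with all names being hypothetical assumptions, might dispatch between a locally stored controlled vocabulary and a networked large-vocabulary service:

```python
# Sketch of two-tier speech services (1582 1583): a small controlled
# vocabulary matched on the device, with fallback to a networked
# large-vocabulary recognizer for free-form text entry.
LOCAL_VOCABULARY = {"save", "undo", "open settings"}

def networked_recognize(utterance: str) -> str:
    # Placeholder for a remote service whose vocabulary may scale to an
    # entire language or to multiple languages (1583).
    return utterance

def recognize(utterance: str, dictating: bool) -> str:
    if not dictating and utterance in LOCAL_VOCABULARY:
        return utterance                   # matched locally; no network needed
    return networked_recognize(utterance)  # large-vocabulary remote matching

print(recognize("save", dictating=False))
print(recognize("meet me at noon tomorrow", dictating=True))
```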

Turning now to FIG. 54, “Speech Recognition Optimizations,” some examples are illustrated of speech interactions that in some examples may be optimized by automated means 1601 and in some examples by manual means 1601 (with various optimization means described elsewhere in more detail but called out here to illustrate some additional optimization examples). In some examples speech interactions may be optimized by collecting and recording failed attempts 1602; in some examples by categorizing collected and recorded failures into groups 1602 (such as in some examples by content analysis software or system 1602, in some examples by the users' choices of speech or wording 1565 1602, in some examples by their context of use 1564 1602, in some examples by the application and application stage 1564 1602, in some examples by a task such as adding a digital event to an online resource such as a PlanetCentral or a GoPort 1564 1602 [as described elsewhere], and in some examples by other categorization means 1602); and in some examples by ranking collected and recorded grouped categories of failures 1602 by each category's rate of success and rate of failure.

In some examples optimization 1601 proceeds by identifying failures 1602 then identifying when a subsequent success occurs and collecting and recording said successes 1603; in some examples by associating successes with collected categories of failures 1603 to create parallel categories of recorded successes 1603; in some examples by sub-grouping the successes within each category 1603 (such as in some examples by content analysis software or system 1603, in some examples by the users' choices of instruction wording 1603, and in some examples by other categorization means 1603); and in some examples by ranking collected and recorded grouped successes 1603 by each group's rate of success and rate of failure.
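The bookkeeping in elements 1602 and 1603 can be illustrated by a small sketch; the category labels and counters below are illustrative assumptions rather than the specification's data model.

```python
# Hypothetical failure/success bookkeeping for elements 1602-1603:
# interactions are recorded per category, then categories are ranked by
# their measured rate of success.
from collections import defaultdict

log = defaultdict(lambda: {"failures": 0, "successes": 0})

def record(category: str, succeeded: bool) -> None:
    log[category]["successes" if succeeded else "failures"] += 1

def success_rate(counts: dict) -> float:
    total = counts["failures"] + counts["successes"]
    return counts["successes"] / total if total else 0.0

def ranked_categories() -> list:
    # Rank grouped categories by each group's rate of success (1602 1603).
    return sorted(log.items(), key=lambda item: success_rate(item[1]),
                  reverse=True)

record("background replacement wording", False)
record("background replacement wording", True)
record("paywall setup wording", False)
print(ranked_categories())
```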

In some examples specific failures 1602 may be associated with specific successes 1603 and the means employed in those successes to interactively turn failures into successes (such as in some examples as part of its speech recognition interface 1559 1581; in some examples as part of interacting with a user by means of speech I/O 1559 1567 1568; in some examples generating and transmitting a correction request 1567 1568, in some examples generating and transmitting example interactions 1567 1568, in some examples displaying a list of example corrections 1567 1568, and in some examples generating and delivering other types of corrective actions or suggestions 1567 1568); and in some examples means that turned failures into successes 1604 may be tested 1604 (such as in some examples by automated means as described elsewhere, and in some examples by manual means).

In some examples the result of certain tests 1604 is a declining rate of user success 1605 (which in some examples may be measured and/or reported as an increased rate of user failure 1605), and said means are discarded rather than utilized to improve user success 1606. In some examples the result of certain tests 1604 is to deliver a higher rate of user success 1605 and said tested means to improve user success may subsequently be delivered to users in some examples as part of a speech recognition interaction system 1606 1558 1580 (such as in some examples when providing a speech recognition interface 1559 1581; in some examples in the steps or process[es] utilized to interact with a user by means of speech I/O 1559 1567 1568; in some examples when generating and transmitting a correction request 1567 1568, in some examples when generating and transmitting example interactions 1567 1568, in some examples when displaying a list of example corrections 1567 1568, and in some examples when generating and delivering other types of prompts, suggestions, corrective actions, etc. 1559 1567 1568 1569); and in some examples as part of an additional system that raises speech recognition success rates (such as AK as part of an AKM which may improve user success as well as provide additional optimizations, as described elsewhere).
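A minimal sketch of this gating step, assuming measured before-and-after success rates, might read:

```python
# Hypothetical gate for tested corrective means (1604 1605 1606): deploy a
# candidate only if it raises the measured user success rate; otherwise
# discard it rather than utilize it.
def evaluate(baseline_rate: float, tested_rate: float) -> str:
    if tested_rate > baseline_rate:
        return "deploy"   # deliver to users as part of the system (1606)
    return "discard"      # declining rate of user success (1605)

print(evaluate(baseline_rate=0.72, tested_rate=0.81))  # -> deploy
print(evaluate(baseline_rate=0.72, tested_rate=0.65))  # -> discard
```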

Productivity means doing more with fewer resources. Efficiency means producing more with fewer steps and at lower costs. Effectiveness means reaching goals in faster and better ways. Happiness means eliminating problems while spending more time doing what we want. Wealth means earning more and being able to do more while spending less.

Today we live in a blizzard of new and complex networked electronic devices that increasingly require us to figure out and use new combinations of hardware, software, networks, communications, services, data, entertainment, etc. Some of these are illustrated in the subsidiary devices zone 2226 2227 in FIG. 55, “RCTP—Subsidiary Devices (SD) Summary.” In a brief summary of some examples, some of these SD's 2227 include mobile phones 2228, wearable electronic devices 2228, PCs 2229, laptops 2229, netbooks 2229, tablets 2229, electronic pads 2229, video games 2229, servers 2229, digital televisions 2230, set-top boxes 2230, DVR's (digital video recorders) 2230, television rebroadcasters 2230, surveillance cameras 2231, sensors 2231, Web services 2232, and RTPs (Remote Teleportals) 2233. Increasingly, a single task can become multi-faceted if it includes picking up or starting one of these SD's (like a tablet, pad or smart phone); turning it on and connecting it to a network (like the Internet or a mobile phone service); running an application that uses a remote service (like search, an electronic reader, a social media application for a service like Facebook, voice-recognition texting, etc.); then accessing remote and/or local data to perform a task that includes a different remote service (like taking a photograph with the device, cropping it with a picture editor on the device, using a messaging application to write a text message or a social media update, attaching the cropped photo and sending it).

SD's 2227 run different operating systems, use different interfaces, access the Internet over different services, and employ different means for communications and for other digital tasks. Superficially, they seem to be many different types of devices but when factored down they are basically digital means to work with words, pictures, video, music, entertainment, communications and data—they provide many of the same features even though they have different physical appearances, software interface designs, protocols, networks, applications, etc. Factoring their differences shows that they have many similar features that include find, open, display, scroll, select, highlight, link, navigate, use, edit, save, record, play, stop, fast forward, fast reverse, go to start or end, display menu, lookup, contact, connect, communicate, attach, transmit, disconnect, copy, combine, distribute, redistribute, broadcast, charge, bill, make payments, accept payments, etc.
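One way to picture this factoring, as a sketch only, is a single shared command vocabulary that any SD could advertise support for; the enumeration below is a small hypothetical subset of the features just listed.

```python
# A hypothetical factored command set shared across otherwise different
# subsidiary devices; only a small subset of the listed features is shown.
from enum import Enum, auto

class CommonCommand(Enum):
    FIND = auto()
    OPEN = auto()
    DISPLAY = auto()
    SELECT = auto()
    PLAY = auto()
    STOP = auto()
    RECORD = auto()
    SAVE = auto()
    CONNECT = auto()
    TRANSMIT = auto()
    DISCONNECT = auto()

# An SD that declares which CommonCommands it supports could be driven
# through one familiar interface regardless of vendor or form factor.
print([c.name for c in (CommonCommand.PLAY, CommonCommand.RECORD)])
```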

Is it possible to tame this blizzard of overlapping features, devices and their remote services in ways that make us more productive because we can do more with fewer resources? In ways that make us more efficient because we can produce more with fewer steps and at lower costs? In ways that make us more effective because we can reach goals in faster and better ways? In ways that make us happier because we can eliminate the problems from needing to buy, learn and use too many different devices and different complicated interfaces, so that we can spend more time on what we want? In ways that make customers wealthier because we can do more and earn more from what we do, while spending less on unnecessary devices and services? Remote Control Teleportaling (herein RCTP) provides means to turn some types of electronic devices into SD's (subsidiary devices) that can be run in some examples with a common, familiar interface from devices such as LTP's (Local Teleportals) and MTP's (Mobile Teleportals); and in some examples with a remote control interface that resembles each SD's interface; and in some examples with a different remote control interface.

In some examples it is therefore possible to turn a plurality of types of networked electronic devices into SD's that can be run by RCTP in some examples by an SD's owner, and in some examples without needing to buy those SD's, their applications, their digital content, or pay for the services to which they subscribe. That latter option may be provided by SD Servers which in some examples are servers, in some examples are services, in some examples are applications, in some examples are provided by third-parties, in some examples are provided by API's, in some examples are provided by modules, in some examples are provided by widgets, and in some examples are provided by other means.

If this is possible it could affect industries 2226 such as devices, applications, content and services, which are larger than just the devices that some vendors sell. In some examples the affected industries include mobile phones 2226, in some examples computers 2226, in some examples tablets 2226, in some examples servers 2226, in some examples televisions 2226, in some examples DVR's 2226, in some examples surveillance 2226, in some examples various types of sensors 2226, and in some examples other types of networked electronic devices and/or devices with networked electronic controllers. The affected industries 2226 could also include the vendors of in some examples device operating systems 2226, in some examples software applications 2226, in some examples office software 2226, in some examples creative applications for creating or editing content 2226, and in some examples modules or services for providing these applications through these devices 2226. The affected industries could also include the vendors of digital content such as in some examples music 2226, in some examples movies 2226, in some examples television shows 2226, in some examples books 2226, in some examples expensive college textbooks 2226, in some examples digital magazines 2226, in some examples news 2226, in some examples other types of digital content 2226. The affected industries could also include some network-based industries 2226 that provide bandwidth such as in some examples mobile phone services 2226, in some examples cable or satellite television services 2226, in some examples other types of specialized connectivity 2226. In addition, it could also affect the remote services industries 2226 that customers use with SD's such as in some examples videoconferencing services 2226, in some examples subscription-only documents 2226 such as journals, in some examples restricted databases 2226 such as purchased by research libraries and available only to authorized patrons, and in some examples other types of remote services 2226. In some examples the affected industries could also include other industries that sell other types of products, equipment, applications, software, content, services and more to owners of SD's.

From an economic history view, it is possible to draw a parallel between RCTP and unbundling compound products. In one example the music industry used to sell single songs for a single song price, but over time managed to evolve the product into selling entire albums for $10 to $16 each—but when digital technology recently re-enabled the selling and buying of single songs, the customer's average music purchase dropped from an album to a song and the industry lost a major portion of its revenues. Similarly, newspapers and magazines never wanted to sell individual articles for pennies so packaged their products into selling a whole magazine or a whole newspaper with multiple editorial components, and even further evolved the product packaging to lock customers into subscribing to multiple issues—yet again, when digital technology enabled clicking to only the individual article that a customer wants to read, instead of buying a whole publication customers stopped subscribing, and many stopped paying for most editorial content. In another example cable TV bundles television into a dual stream of forcing subscribers to buy numerous channels (availability of 500 channels times 24 hours a day of programming) plus charges to advertisers (running ads across 500 channels times 24 hours a day of programming) for access to those subscribers—but digital DVRs and Internet television shows make it possible for customers to view or buy only the few shows they actually watch with almost no advertisements, which has started unbundling cable TV. In some examples RCTP might be viewed as a similar digital unbundling, wherein each customer no longer needs to buy their own entire networked electronic device with its required software, copies of digital content and specialized services, just to receive the functions they occasionally want, but can instead click to just what they need when they need it—which in some examples might simultaneously unbundle a plurality of hardware, software, content, services and other industries.

In some examples RCTP could help simplify the range of SD's—with fewer devices that need to be bought, fewer interfaces that need to be figured out and learned, less content that needs to be bought and owned by each individual, and fewer network services that need to be paid to be used. Potentially, one or a plurality of customers and users could be more productive, more efficient, more effective, happier and wealthier—doing more and receiving more, while spending less. Potentially, this would also be different for the affected industries' 2226 manufacturers and vendors because RCTP access and use of one or a plurality of types of electronic devices might alter the number of device manufacturers, software developers, network services vendors, remote services vendors, and application creators—as well as alter the operations and focus of each industry's leading vendors—because what they sell and how it is used could be more accessible to a wider range of customers, in some examples because each user would no longer need to purchase or personally own as many devices, applications, content and services. As a result, one or a plurality of those devices, vendors or industries might be turned into more of a service in some examples, a commodity in some examples, a smaller industry in some examples, a large vendor of generic functions in some examples, a successful niche vendor of a superior branded function in some examples, a leader in one or a plurality of categories that has a large customer base through digital access in some examples, or have other material and operating consequences.

In the end, is it possible to turn today's hailstorm of complex electronic devices into “subsidiary devices” (herein SD), and enter a “Post Subsidiary Device Stage” (herein “Post SD Stage”) of electronic device development? When printing and publishing began, it took about 75 years to develop the modern book (from about 1445 to 1520) during which time the printed book evolved from a few expensive copies of hand-rendered calligraphy into its now familiar standard components, order and layouts that became more affordable by a wider range of readers. Might RCTP help advance a similar evolution of digital devices today, wherein some digital devices and functions are rationalized into a smaller number of consistent usage designs and predictable processes within an accessible digital environment that is more affordable for wider use with greater benefits to more people? If so, that would be an Alternate Reality indeed—a Post SD Stage whose evolution is envisioned and described by the ARTPM.

Additionally, in some examples RCTP systems, methods, apparatuses and processes for remote control can be embodied in specific systems that each provide a range of focused benefits; such as in some examples an SD server(s), in some examples a help desk for various types of electronic devices (such as subsidiary devices enumerated elsewhere), in some examples customer support that includes hands-on use of a device or system being supported, in some examples an education or teaching system that utilizes a plurality of SD's under individual remote control or simultaneous remote control, in some examples technical support for complex equipment or complex devices, in some examples for services such as telecommunications, vehicle operations, equipment operations, etc.

RCTP—Subsidiary Devices Summary: Currently, large numbers of people have become buyers and users of electronic devices such as computers 2229, laptops 2229, netbooks 2229, tablets 2229, video games 2229, mobile phones 2228, televisions 2230, television set-top boxes 2230, digital video recorders 2230, network services 2232, Web services 2232, remote services 2232, etc.—not to mention the numerous types of software, digital content and services that run on them, or provide connectivity or content to them. As these have become increasingly ubiquitous and popular, users have the growing problem of too many devices and too many expenses for using similar features and performing similar tasks in the many different ways sold by what gradually become competing industries. Here, this RCTP advance provides means that enable a user to gain remote control over one or a plurality of electronic devices, and thereby turn them into subsidiary devices—perhaps reducing the dependence on any one of those industries, devices, services, applications, etc.

FIG. 55, “RCTP—Subsidiary Devices Summary”: In some examples one or a plurality of devices (with some examples at bottom) may be controlled by RCTP. In some examples one or a plurality of SD's include similar components (with some examples in the middle). In some examples the data and/or applications required to connect to one or a plurality of SD's may be stored in one or a plurality of means (with some examples illustrated at top), with each record corresponding to a subsidiary device. In some examples one or a plurality of a user's personally owned SD's are accessible by that person; in some examples SD's that may be owned by a plurality of individual owners and/or third-parties are registered with and/or accessible by one or a plurality of SD servers.

FIG. 56, “RCTP—Plurality of Simultaneous Subsidiary Devices”: In some examples a single user with a single Controlling Device (herein CD) may simultaneously access and remotely control a plurality of SD's, such as in some examples a computer, in some examples a cable television set-top box, in some examples a video game, in some examples an RTP, etc. Optionally, in some examples said identity may access and use one or a plurality of SD's by means of an SD server.

FIG. 57, “RCTP—Plurality of Identity(ies) with Subsidiary Device(s)”: In some examples a single user selects an identity and that automatically (and/or manually) retrieves and opens one or a plurality of that identity's SPLS(s), which may include one or a plurality of SD's that may be accessed and remotely controlled directly. Optionally, in some examples said identity may access and use one or a plurality of SD's by means of an SD server. Selecting an SD retrieves the appropriate record(s) and/or application(s) required to access and use the selected SD. In some examples a user may access a plurality of SD's to use them simultaneously.

FIG. 58, “RCTP—Summary Subsidiary Devices Control/Data Process”: In some examples a CD (Controlling Device) is connected to one or a plurality of SD's that have different device profiles, different data formats, and different local storage, by means of communications for remote control. In some examples a configurable CD receives and utilizes stored device profile data and/or (an optional) control application(s) in some examples from an SD, in some examples from local storage, in some examples from remote storage, and in some examples from another source such as a vendor, a user or others. In some examples said device profile and/or control applications are utilized, in some examples with RCTP processing, to access and control one or a plurality of SD's by receiving data from each SD and sending commands to each SD in some examples by one or a plurality of networks.

FIG. 59, “RCTP—Subsidiary Devices Protocols”: In some examples a protocol employed in communications and/or control between a CD and an SD may be retrieved in some examples from local storage, in some examples from remote storage, and in some examples from another source. In some examples a protocol is not retrievable and in some examples one or a plurality of parts of the required protocol may be generated; if generated successfully, in some examples said generated protocol may be saved for future use by one or a plurality of future users. In some examples a retrieved and/or generated protocol is utilized to establish and maintain communications and/or control between a CD and an SD.
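As a sketch under stated assumptions (the protocol stores and the trivial "generation" step below are hypothetical stand-ins), the retrieve-or-generate flow of FIG. 59 might look like:

```python
# Hypothetical retrieve-or-generate protocol lookup for FIG. 59.
LOCAL_PROTOCOLS = {"Samsung/6500 TV": "UPnP"}
REMOTE_PROTOCOLS = {"HP/G62m laptop": "RDP"}
GENERATED_CACHE: dict[str, str] = {}

def protocol_for(device: str) -> str:
    # Try local storage, then remote storage, then previously generated.
    for store in (LOCAL_PROTOCOLS, REMOTE_PROTOCOLS, GENERATED_CACHE):
        if device in store:
            return store[device]
    # Not retrievable: generate the required protocol parts (stand-in),
    # then save the result for future use by future users.
    generated = f"generated-protocol-for-{device}"
    GENERATED_CACHE[device] = generated
    return generated

print(protocol_for("Samsung/6500 TV"))   # retrieved from local storage
print(protocol_for("new-sensor-42"))     # generated, then cached
```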

FIG. 60, “RCTP—Control and Viewer Application(s)”: In some examples control applications and/or viewer applications are run by a CD (Controlling Device). In some examples control applications and/or viewer applications are run by an SD. In some examples control applications and/or viewer applications are run in some examples by a server(s), in some examples by a third-party service(s), and in some examples by another means for external control of one or a plurality of SD's. In some examples control applications and/or viewer applications are downloaded from and/or run by an SD server. In some examples control applications and/or viewer applications can be requested and downloaded from a plurality of sources. In some examples after being requested and downloaded control applications and/or viewer applications can be stored for faster future retrieval and use.

FIG. 61, “RCTP—Initiate Control and Viewer Application(s)”: In some examples a user utilizes a CD and selects an SD for remote control which may (optionally and if needed) request and retrieve the device profile from one of a plurality of sources; and in some examples said SD selection may (optionally and if needed) request and retrieve the required control application and/or viewer application from one of a plurality of sources, and execute said application(s). In some examples said device profile and application(s) may be auto-retrieved from one of a plurality of sources; and in some examples said device profile and application(s) may be manually retrieved from one of a plurality of sources. In some examples a remote control interface may be generated under program control such as when a uniform remote control interface is desirable; and in some examples said generated remote control interface can include a subset of factored standard commands based on each SD's device profile. In some examples an SD needs a control application and/or viewer application and does not have that stored locally, in which case means are provided for a CD to retrieve the application(s), download it to the SD and execute it.

FIG. 62, “RCTP—Control Subsidiary Device”: In some examples a CD selects an SD and sends a connection control request to said SD; and in some examples a CD utilizes an SD server to select said SD. In some examples said selection is followed by the automated or manual retrieval and execution of the appropriate device profile, control application and/or viewer application for remote control. In some examples said application(s) is used to send a connection control request to said SD by means of the appropriate protocol. In some examples a CD sends and an SD receives a connection control request (and optionally the CD, SD and/or identity may be authenticated and/or authorized). In some examples the CD connects to the SD using in some examples a known communications protocol and in some examples a known control protocol, and in some examples a generated protocol is used (as described elsewhere). In some examples after a control connection is established between devices a control session includes in some examples running a control application and/or viewer application; in some examples displaying at the CD a control interface which displays available remote control options and may be employed to enter one or a plurality of remote control instructions. In some examples translation is not required so the selected control instruction may be transmitted to the SD which receives the command and executes it; in some examples the SD transmits updated SD state information, condition or data to the CD; in some examples translation is not required so the received SD data is displayed by the control interface at the CD. In some examples translation is required for remote control instructions issued and transmitted by a CD (which is described elsewhere) to be received and utilized by an SD; and in some examples translation is required for updated SD state, condition or data that is transmitted to a CD (which is described elsewhere) to be received and displayed by a CD in an updated control interface. In some examples one or a plurality of SD instructions and uses may be logged such as during some paid uses of an SD.
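The control session of FIG. 62 may be compressed into a short sketch; the classes and method names below are hypothetical, and a real CD and SD would communicate over a network using the protocols described above.

```python
# Highly simplified sketch of a FIG. 62 control session (hypothetical names).
class SubsidiaryDevice:
    def __init__(self, name: str):
        self.name, self.state = name, "idle"

    def execute(self, command: str) -> str:
        self.state = command                       # receive and execute
        return f"{self.name} state: {self.state}"  # updated SD state to CD

class ControllingDevice:
    def connect(self, sd: SubsidiaryDevice, authorized: bool = True) -> None:
        # Optional authentication/authorization before control is granted.
        if not authorized:
            raise PermissionError("identity not authorized for this SD")
        self.sd = sd

    def send(self, instruction: str, translate=None) -> str:
        # Translate the instruction only if this CD/SD pairing requires it.
        command = translate(instruction) if translate else instruction
        return self.sd.execute(command)            # result displayed at CD

cd = ControllingDevice()
cd.connect(SubsidiaryDevice("set-top box"))
print(cd.send("play recorded movie"))
```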

FIG. 63, “RCTP—Translate Inputs to SD and Outputs from SD”: In some examples a networked SD capable of control can be managed and controlled by a CD even if said CD requires translation in one or both directions (in some examples when transmitting instructions or commands, and in some examples when receiving updated SD state, condition or data after it executes said instructions or commands). In some examples a CD's instructions are translated into an SD's commands or protocol. In some examples the output from the SD, such as its new state, its condition, SD data, etc., is translated into data that is compatible with the CD's remote control. In some examples said translation(s) can be performed in one or a plurality of apparatuses, applications or services; in some examples said translation utilizes an industry-standard protocol; in some examples said translation utilizes a proprietary protocol; in some examples said translation utilizes a generated protocol (as described elsewhere); and in some examples said translation is accomplished with a custom integration between the devices that may in some examples utilize a subset of device commands, and in some examples provide translation by other known means (as described elsewhere).
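Two-way translation, as described for FIG. 63, reduces to a pair of mappings in this illustrative sketch; the command and state tables are assumptions for illustration only.

```python
# Hypothetical two-way translation tables for FIG. 63.
TO_SD = {"play": "CMD_PLAY_01", "stop": "CMD_HALT_09"}
FROM_SD = {"STATE_PLAYING": "Playing", "STATE_HALTED": "Stopped"}

def translate_instruction(cd_instruction: str) -> str:
    # CD to SD direction: a CD's instruction becomes an SD-native command.
    return TO_SD[cd_instruction]

def translate_state(sd_state: str) -> str:
    # SD to CD direction: SD output becomes data the CD's interface displays.
    return FROM_SD[sd_state]

print(translate_instruction("play"))     # -> CMD_PLAY_01
print(translate_state("STATE_PLAYING"))  # -> Playing
```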

Turning now to FIG. 55, “RCTP—Subsidiary Devices Summary,” some examples of layers in an RCTP architecture are illustrated. In some examples an affected industries electronic devices layer 2226 includes a range of electronic subsidiary devices 2227 as described elsewhere. In some examples a subsidiary device's components layer 2212 includes the components of a wired and/or wireless electronic device 2213, which in some examples includes a CPU 2219 coupled to a wired network interface 2223 for communicating with a network such as a LAN 2224 and a Controlling Device (herein CD) such as an LTP or an MTP; in some examples includes a CPU 2219 coupled to a wireless network interface 2223 and an optional antenna 2221 for communicating with a wireless network or directly with a device remote control such as WiFi 2222, Bluetooth 2222, IR (line-of-sight infrared) 2222, cellular radio 2222, etc. and thereby with a CD such as an LTP or an MTP; in some examples includes a CPU 2219 coupled to memory 2214 which may also load and run an optional control application 2215 or an optional viewer application 2215 (as described elsewhere); in some examples includes a CPU 2219 coupled to (optional) video processing 2216, audio processing 2216, graphics processing 2216, television tuner processing 2216, or other media processing 2216; in some examples includes a CPU 2219 coupled to (optional) storage 2217 that may store data or applications utilized in Remote Control Teleportaling such as in some examples a stored control application 2217, in some examples a stored viewer application 2217, in some examples a stored device profile 2217, in some examples a stored device interface 2217, in some examples one or a plurality of communications protocols 2217; in some examples includes a CPU 2219 coupled to an (optional) display 2218, which in some examples may be a touch screen display 2218, in some examples may be an LCD display 2218, and in some examples may be another type of visual display 2218; in some examples includes a CPU 2219 coupled to one or a plurality of user interfaces 2220 such as in some examples a keypad 2220, in some examples a keyboard 2220, in some examples a pointing device 2220, in some examples a control panel 2220, in some examples buttons 2220, in some examples dials 2220, in some examples a voice command interface 2220, and in some examples other types of user interface controls 2220 as described elsewhere.

In some examples said electronic subsidiary device(s) 2227 wireless 2222 or wired 2223 interconnections may be directly with a CD such as an LTP or an MTP; in some examples said wireless 2222 or wired 2223 interconnections may be with a CD such as an LTP or an MTP over one or a plurality of networks; in some examples said wireless 2222 or wired 2223 interconnections may be with one or a plurality of SD server(s) over one or a plurality of networks, and said SD server(s) provide interconnections with a CD such as an LTP or an MTP. Alternatively, a CD (the controlling device) may be a different type of SD (subsidiary device) such as in various examples a mobile phone 2228, a wearable electronic device 2228, a PC 2229, a laptop 2229, a netbook 2229, a digital tablet 2229, an electronic pad 2229, a video game 2229, a server 2229, a digital television 2230, a set-top box 2230, a DVR (digital video recorder) 2230, a television rebroadcaster 2230, a Web service 2232, a remote service 2232, etc.

In some examples an individual's subsidiary devices (layer 2201) includes one or a plurality of records 2202 that may be contained in one or a plurality of databases, with each record containing data that corresponds to an identity's device 2203 2204 2205 2206 2207 2208 2209 2210 or with each record containing data that corresponds to a device associated with an SPLS 2203 2204 2205 2206 2207 2208 2209 2210. In some examples said records 2202 are stored by an identity's CD; in some examples said records 2202 are stored remotely but accessible by said identity's CD; and in some examples said records 2202 are associated with one or a plurality of SD server(s). Collectively, said records contain data that corresponds to the subsidiary devices 2202 associated with an individual 2201.

For ease of illustration, only a portion of the database 2202 is illustrated relating to a components layer 2212 2213 and an affected industries electronic devices layer 2226 2227; though said database 2202 may contain other subsidiary device data utilized in providing access to, and control of, specific SD's. As shown in said SD layer 2201, an individual's SD's 2202 and/or a server's SD's 2202 includes one or more records, each associated with an SD. In some examples each record contains data corresponding to an SD such as in some examples an identity name field 2203 contains the name of one of an individual's identities (as described elsewhere; such as John Smith); in some examples an SPLS name field 2203 contains the name of one of an individual's SPLS's (as described elsewhere; such as family, coworkers, members of team X, etc.); in some examples an identity/SPLS name field 2203 contains the name of one of an individual's identities combined with the name of one of said individual's SPLS's (as described elsewhere; such as John Smith/family); in some examples a device name field 2204 contains a user's name for a specific device 2204 (such as laptop, mobile phone, etc.); in some examples an icon field 2204 contains an icon or symbol that represents said device graphically (wherein said icon or symbol may be provided by a vendor, based on a vendor's logo, selected by a user to fit a personal preference, etc.); in some examples a device's vendor field 2205 contains a device's vendor's name (such as Apple, HP, Samsung, etc.); in some examples a device's model name field 2205 contains a device's model name (such as iPhone4, G62m laptop, 6500 TV, etc.); in some examples a vendor/device model name field 2205 contains the name of a vendor combined with a device's model name (such as Apple/iPhone4, HP/G62m laptop, Samsung/6500 TV, etc.); in some examples a device's communications protocol(s) field 2206 contains the names of the device's communications protocol(s) (such as RDP, Modbus, UPnP, etc.); in some examples a device's address field 2207 contains the device's address (such as its IP address such as the IPv4 address 170.12.250.4, or an IPv6 address); in some examples a device's interface field 2208 contains the device's network interface or its communications interface (such as Ethernet, LAN, WiFi, line-of-sight IR, etc.); in some examples a device's control application(s) field 2209 contains the name (including version number) of its control application or the name (including version number) of its viewer application (as described elsewhere), and in some examples contains the device's control application 2209 and/or its viewer application 2209; in some examples a login requirement field(s) 2210 contains whether login and/or authentication is required and if so data such as a login ID and/or password, or whether said subsidiary device may be accessed without login, authentication or authorization 2210; and in some examples other subsidiary device data may be included as needed to provide access to, and control of, a subsidiary device(s).

In some examples each SD record is representative of a single SD device and contains data for selecting said device, accessing said device, and accessing and running the appropriate control and/or viewer application(s) to control said device (which will be discussed in connection with subsequent figures). The fields in said record may contain the actual items (such as in some examples icons or symbols, in some examples control or viewer applications, etc.) or alternatively may be pointers to locations in storage or memory (whether local or remote) where the relevant data may be found and retrieved.
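For illustration only, the record just described might be rendered as the following data structure; the field names are hypothetical, and any field may equally hold a pointer to locally or remotely stored data.

```python
# Hypothetical rendering of an SD record (2202) with the fields 2203-2210.
from dataclasses import dataclass
from typing import Optional

@dataclass
class SDRecord:
    identity_or_spls: str       # 2203, e.g. "John Smith/family"
    device_name: str            # 2204, e.g. "laptop"
    vendor_model: str           # 2205, e.g. "HP/G62m laptop"
    protocols: list[str]        # 2206, e.g. ["RDP"]
    address: str                # 2207, e.g. "170.12.250.4"
    interface: str              # 2208, e.g. "WiFi"
    control_app: Optional[str]  # 2209, name/version or a storage pointer
    login_required: bool        # 2210

record = SDRecord("John Smith/family", "laptop", "HP/G62m laptop",
                  ["RDP"], "170.12.250.4", "WiFi", "viewer 1.2", True)
print(record.vendor_model)
```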

RCTP—Plurality of Simultaneous Subsidiary Devices: The control of subsidiary devices (SD's) is a departure from today's practice of requiring each person to own a plethora of different types of electronic devices in order to access and use their different features, functions and capabilities. The combination of TP devices and SD's has the potential to assist in converging different types of digital electronics into a single model—a digital environment (as described elsewhere)—which in some examples includes direct use of a spectrum of different digital devices' features and capabilities from one or a plurality of TP devices. Turning now to FIG. 56, “RCTP—Plurality of Simultaneous Subsidiary Devices,” a user 2240 who employs a TP device 2241 has continuous access to visible indications of the availability of a plurality of SD's 2242 2248, which in some examples provides access to that user's owned SD's 2250, and in some examples provides access to additional remote SD's 2251 such as through an (optional) SD server(s) that may be accessed, controlled and used on demand—together providing means to quickly identify and employ the features, functions and capabilities of a wide range of subsidiary devices without necessarily needing to own and/or physically use them locally. Instead, a range of digital electronic devices, tools, services, applications, etc.—together an emerging plurality of digital capabilities that exists with and alongside one's owned electronic devices—may be used and run from one or a plurality of controlling devices 2241.

In some examples a user 2240 employs a Controlling Device (herein a CD) which may be an LTP 2241 which includes a display, means for user interaction, a CPU, memory, storage, communications, and software (as described elsewhere). In some examples a user may employ visually simple and clear means 2242 2248 on said CD to select an icon, name, label, menu choice, graphical object or other clear and direct representation of an available SD (subsidiary device) 2227 2213 2202 from the display of a CD 2241. In some examples rather than displaying said SD's on CD 2241, a list of SD's or a graphical representation of available SD's may be transmitted for display and selection on a remote control held by a user 2240 (such as described elsewhere such as in some examples a URC [Universal Remote Control] described in part in FIG. 36 and FIG. 37). In some examples said user 2240 employs an electronic device to access one or a plurality of SD servers 2251 which include databases that, among other things, associate user requests for SD's with currently available and accessible SD's (as described elsewhere); and said SD server(s) 2251 provide a list of SD's or a graphical representation of available SD's that is transmitted for display and selection; with that user's selection of one or a plurality of SD's transmitted to the SD server 2251. After a user selects one or a plurality of SD's, said selection(s) is communicated to CD processing 2250 which retrieves the selected SD's record 2202 either locally or remotely, including said record's data and address 2203 2204 2205 2206 2207 2208 2209 2210, and initiates CD processing 2250, SD access and SD control (which are described in more detail elsewhere).

In some examples a single user 2240 with a single CD 2241 may simultaneously access and control a plurality of SD's 2252 2253 2254 2255, including accessing and controlling other TP devices 2255 by RCTP means as if they were SD's. Providing means for a single user 2240 to access, view and control multiple SD's provides a greater span of control for a single user, such as to provide seamless navigation and control over multiple simultaneous activities, tasks, resources, tools, devices, etc. in multiple locations. In some examples this is accomplished by means of a TP device 2241 (such as described in more detail elsewhere) which in some examples includes an intuitive user interface and supervisory/management processing that provides interactions and control with one or a plurality of SD devices.

As illustrated in FIG. 56 in some examples a user 2240 utilizes an LTP 2241 to receive and display 2242 2248 indications of available identities and SPLS's (which include IPTR as described elsewhere—Identities, Places, Tools and Resources—which include SD devices; and which may also list SD's independently of a user's identities and SPLS's); in some examples selecting one or a plurality of SD's from said displayed indications 2252 2253 2254 2255; processing each said selection to obtain access and control of each selected SD; administering (optional) user authorization and authentication to be permitted control over each SD; displaying on the user's CD “windowed” means to control and view the output from each SD device (as described elsewhere) such as a PC laptop 2243 2253, a set top box with a DVR 2244 2252, a video game system 2246 2254, and an RTP digital reality (as described elsewhere) running on a remote RTP 2247 2255; entering an instruction on the CD for one of the SD's; if needed, translating the instruction into a device-specific command; relaying to the SD the instruction or device-specific command; receiving and performing the instruction by the SD; transmitting the SD's output to the CD; and receiving and displaying each SD's output on the CD's display 2243 2244 2246 2247.

In some examples a CD apparatus and system 2241 allows for simultaneous control of one or a plurality of SD's that are connected to said CD. Each SD is separately viewed in an “SD window” 2243 2244 2246 2247 wherein each SD's window contains the processed video signal(s) from that one separate SD, and each window may be moved and/or resized as desired. In some examples a CD, such as a TP device 2241, has substantial capacity for multiple simultaneous operations (as described elsewhere in more detail) that in some examples includes simultaneously controlling a plurality of subsidiary devices; while in some examples a CD may have less capacity (such as in some examples where a CD is a netbook, an electronic tablet, a mobile phone, or other electronic device that includes a display, means for user interaction, CPU, memory, storage, communications, and appropriate application software). In each example the number of SD's that may be controlled directly and simultaneously may vary based on each CD's capacity such that some CDs may provide simultaneous control of a larger number of SD's than other CD's can provide. Alternatively, in some examples a smaller CD such as an AID/AOD (such as a mobile phone running a VTP) may control a larger capacity TP device like an LTP, and utilize the larger LTP device's capacities to control more SD's simultaneously, wherein the LTP communicates all the SD windows, controls and outputs within one focused connection to the AID/AOD.

In some examples control over each SD is managed by processing signals from the CD device's 2241 user interface(s) (as described elsewhere, including both direct interfaces such as a pointing device, keyboard, voice, and other means, and also including a URC [Universal Remote Control]). In some examples the focus of a user interface passes from one SD window to another 2243 2244 2246 2247, such as by using a pointing device's pointer to point at a PC laptop's window 2243, thereby highlighting it and making it the focus for instructions, then moving said pointer over a set top box's window 2244, thereby highlighting said second window and making said second SD window the focus for instructions, and subsequently pointing at any desired SD device's window, which both highlights it and makes that SD the focus for commands and instructions. As said user interface is employed to move the focus from one SD window to another, CD processing automatically generates the necessary user interface signals to interact with each highlighted and focused SD. In some examples to control a particular SD 2243 2244 2246 2247, a user 2240 moves the user interface pointer to highlight that particular SD's window. Then, to control a different SD the user 2240 highlights the desired SD's window. If the user does not want active control of one or a plurality of SD's, the user may move the user interface focus off of any one or all of the SD devices.
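The focus-passing behavior can be sketched as follows; the FocusManager name and window labels are hypothetical illustrations of the interaction just described.

```python
# Hypothetical focus manager: pointing at an SD window highlights it and
# routes subsequent user-interface signals to that SD; moving the focus
# off all windows leaves every SD running its last instruction.
class FocusManager:
    def __init__(self):
        self.focused = None

    def point_at(self, sd_window: str) -> None:
        self.focused = sd_window        # highlight and take focus

    def clear(self) -> None:
        self.focused = None             # no SD receives new instructions

    def route(self, signal: str) -> str:
        if self.focused is None:
            return "no SD focused; signal ignored"
        return f"sent '{signal}' to {self.focused}"

fm = FocusManager()
fm.point_at("PC laptop window 2243")
print(fm.route("scroll down"))
fm.point_at("set-top box window 2244")
print(fm.route("pause"))
```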

In some examples an SD device continues performing the last instruction received even when active control is moved away from it, such as in some examples a PC laptop 2243 2253 continues to run the previous software applications that were started (such as in some examples a web browser with multiple tabs open, word processing a document, receiving and replying to e-mail, etc.); in some examples a set top box with a DVR 2244 2252 continues to play a recorded movie or a currently broadcast television show; in some examples a video game 2246 2254 continues running a game; in some examples an RTP 2247 2255 continues to display a real remote place and the specific digital reality applied to it; etc. In some examples an SD's continuing operation(s) may be changed by using a user interface to highlight that SD window and make that SD the focus, then use the SD window interface to issue a new instruction(s) or command(s).

In some examples each SD's audio is managed by the CD 2241 processing the audio from each source 2243 2244 2245 2246 2247 separately and providing automatic and manual audio control over which audio is played, which audio is muted, and the volume of each SD source that is played. As with the video signals, in some examples audio signals are transmitted from each SD 2252 2253 2254 2255 to the CD 2241 for processing and output. In some examples the audio from each SD is sent from their respective outputs to an audio controller and processor within the CD. Said audio controller and processor controls an audio mixer that is connected to the CD's audio amplifier(s) and speaker(s). In some examples the simultaneously received SD audio signals are mixed and controlled so that they match the current preferences of a user 2240, with some user preferences automated and some user preferences manually controlled. In some examples the audio is automated so that only a highlighted window plays audio, so that focusing the user interface on a specific SD window plays its audio; in this example moving the focus to the video game window 2246 plays its audio and mutes other audio sources, while then moving the focus to the set-top box 2244 turns on its broadcasted audio while muting the other sources. In some examples the audio from all sources is mixed and manually controlled so that all audio sources 2243 2244 2245 2246 2247 are available with each SD's volume under user 2240 control; in this example a user could listen to a set top box broadcast show 2244 at a normal full volume while playing a video game 2246 softly and muting other sources. In some examples the audio is mixed and played with a combination of automated and manual controls so the combination matches a user's preferences with as little manual adjustment as possible; in this example a user could set all focused connections 2245 with others to automatically and always be set at full normal volume, while adjusting other sources manually 2243 2244 2246 2247 as desired at any given time.
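The combined automatic and manual audio policy might be sketched as below; the source names and the single "focused plays at full volume" rule are illustrative assumptions drawn from the examples above.

```python
# Hypothetical audio mixing policy: the focused SD (and any source marked
# always-full, such as a focused connection with another identity 2245)
# plays at full volume; every other source keeps its manually set level.
def mix(sources: dict, focused: str,
        always_full: frozenset = frozenset()) -> dict:
    levels = {}
    for name, manual_level in sources.items():
        if name == focused or name in always_full:
            levels[name] = 1.0           # automated: full normal volume
        else:
            levels[name] = manual_level  # manual per-SD control (0.0 = muted)
    return levels

sources = {"set-top box 2244": 0.0, "video game 2246": 0.2,
           "focused connection 2245": 0.0}
print(mix(sources, focused="set-top box 2244",
          always_full=frozenset({"focused connection 2245"})))
```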

In some examples a CD can utilize remote control means (as described elsewhere) to select between the plurality of simultaneously controlled SD's the one SD that the user wants to control remotely at a given moment. In some examples a user can select between the plurality of simultaneously controlled SD's the two or a plurality of SD's that the user wants to control remotely at a given moment. In some examples a user can select two or a plurality of remotely controllable SD's to perform a single remote control instruction that corresponds to said plurality of selected SD's; such as in some examples to open two or a plurality of SD's simultaneously, in some examples to end the remote control session with two or a plurality of SD's simultaneously, in some examples to start the recording function of two or a plurality of SD's by entering a single remote control instruction; and in some other examples to perform a different but commonly available remote control feature or function with two or a plurality of SD's simultaneously.

As illustrated in the examples in FIG. 56, said CD user 2240 has a focused real-time connection (as described elsewhere) with another identity (user) 2245. Said CD user 2240 may share the output from one or a plurality of SD's 2243 2244 2246 2247 with the other identity 2245. In some examples the other identity 2245 may be passed remote control over one or a plurality of remotely controlled SD's 2243 2244 2246 2247. Alternatively, in some examples said CD 2241 may be used to broadcast (as described elsewhere) the output from one or a plurality of SD's 2243 2244 2246 2247 to one or a plurality of recipients. Alternatively, in some examples said CD 2241 may utilize one or a plurality of SD servers 2251 to obtain remote control over one or a plurality of SD's 2227 2213 2202, and said CD 2241 may be used to broadcast (as described elsewhere) the output from one or a plurality of SD's 2243 2244 2246 2247 to one or a plurality of recipients. In some examples RCTP enables a digital environment with far more productive and widespread uses of a limited number of SD's by a larger number of users and recipients of their output. In some examples an RCTP system and apparatus may be described as turning unitary and generally solitary electronic devices into virtualized resources that may be accessed and employed by a plurality of users and audiences.

Plurality of Identity(ies) with Subsidiary Device(s): As described elsewhere in some examples TP devices enable a consistent system wherein subsidiary devices (SD's) and the applications, services, features, functions, and capabilities they provide are logically and automatically available for connection and use—in other words, selecting available SD's may be automated and direct. While it may be imagined that it is complicated to select and use one or a plurality of identities, and then select one or a plurality of subsidiary devices, the use of a TP device 2241 may include in some examples the identification of a user 2240, in some examples the identification of one or a plurality of said user's identity(ies) 2240 2242 2248 (as described elsewhere), or in some examples the selection of one or a plurality of one of said user's identities' SPLS(s) 2240 2242 2248 (as described elsewhere). In each example the selection of a user, identity, and/or SPLS automatically retrieves and displays the appropriate continuous visible indications of the appropriate SD's 2242 2248 that may be used. This is automated so there is reduced need to search and figure out the available SD's, such as for example even a basic user being presented with SD choices so they can perform immediately at advanced levels.

FIG. 57, “Plurality of Identity(ies) with Subsidiary Device(s),” illustrates some examples in which a user selects an identity 2260 (as described elsewhere), and some examples in which a user selects an SPLS (as described elsewhere). Said user's selection of identity(ies) 2260 and/or SPLS(s) 2260 causes retrieval 2261 2262 and display 2263 of a subsidiary device list 2261 from information stored in one or a plurality of user profile databases 2262. In some examples said subsidiary device list 2261 is based on an identity's profile 2262, while in some examples said subsidiary device list 2261 is based on an identity's selected SPLS(s) 2262. Following said retrieval 2261 2262, the appropriate subsidiary device(s) list 2263 is presented to the user 2263 as described elsewhere (such as in some examples 2242 2248 in FIG. 56). In some examples said indications of available subsidiary device(s) 2263 2242 2248 may be retrieved from an optional SD server 2264 (as described elsewhere in more detail) to provide access to subsidiary devices from multiple remote sources.

In some examples when a user selects an SD 2265 from the presentation of available SD's 2263, local and/or remote records are accessed that in some examples include a database with records and resources for each type of SD, in some examples with records for each individual SD, in some examples the actual individual SD's, and in some examples other sources. Based on each device's record in some examples, or device's response in some examples, the appropriate data on that device is retrieved 2266 which in some examples includes a device profile 2266, in some examples includes a device interface (herein “DI”) 2266, in some examples includes a control application 2266, and in some examples includes a viewer application 2266. In some examples said retrieval(s) for a selected device 2265 may have been performed previously 2266 and may have been stored locally for faster retrieval in the future. In some examples said retrieval(s) for a selected device 2265 may not have been performed previously and therefore retrieval from remote storage 2266 is required. In some examples one or a plurality of said retrieval(s) for that device 2265 may have been performed previously but not stored locally, and therefore retrieval from remote storage 2266 is required. In some examples the availability of an owned SD 2261 2262 triggers said retrievals for all owned SD's 2266 so the appropriate device profile 2266, DI 2266, control application(s) 2266, and viewer application(s) 2266 are stored locally for faster owner access to all owned SD's in the future. In some examples after running appropriate RCTP components (as described elsewhere) for an identity's known SD 2261 2262 or for an SPLS's known SD 2261 2262, the SD is used 2270.
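This retrieve-and-cache behavior reduces to a short sketch; the stores below are hypothetical stand-ins for local storage and remote storage 2266.

```python
# Hypothetical retrieve-and-cache of device data (2265 2266): serve from
# local storage when previously fetched, otherwise retrieve remotely and
# store locally on the assumption that an SD used once is likely reused.
LOCAL_CACHE: dict[str, dict] = {}
REMOTE_STORE = {"Samsung/6500 TV": {"profile": "tv-profile",
                                    "viewer_app": "tv-viewer 2.0"}}

def retrieve_device_data(device: str) -> dict:
    if device in LOCAL_CACHE:
        return LOCAL_CACHE[device]       # previously retrieved and stored
    data = REMOTE_STORE[device]          # retrieval from remote storage
    LOCAL_CACHE[device] = data           # cache for faster future retrieval
    return data

print(retrieve_device_data("Samsung/6500 TV"))
print(retrieve_device_data("Samsung/6500 TV"))  # second call served locally
```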

In some examples when a user manually selects a device 2265 2267 that is not among the available SD's presented 2263, the appropriate data on that device is retrieved 2267 2266, which in some examples is a device profile 2266, in some examples is a DI 2266, in some examples is a control application 2266, and in some examples is a viewer application 2266. Since said manual selection has not been performed before, said retrieval(s) for that manually added device have not been performed previously and therefore retrieval from remote storage 2266 is required. When these retrieval(s) 2267 2266 are performed, said retrieved data may be stored locally for faster retrieval in the future (based on the assumption that an SD that is used once is more likely to be used again). In some examples the manual selection of a device 2265 2267 triggers the automatic addition of said device in some examples to the currently opened user 2268 2242 2248, in some examples to the currently open identity(ies) 2268 2242 2248, and in some examples to the currently open SPLS(s) 2268 2242 2248—in all examples to update the available SD's presented 2263. In some examples after running appropriate RCTP components (as described elsewhere) for a manually selected SD 2267 2266, said SD is used 2270.

In some examples indications of available subsidiary devices 2263 2242 2248 have been retrieved from an optional SD server 2264 (as described elsewhere in more detail) such as to provide access to other types of subsidiary devices or their applications, content, services, broadcasts, functions, features, capabilities, etc. that a user does not own. When a user selects a device 2263 from an optional SD server 2264, the appropriate data on that device is retrieved 2267 2266, which in some examples is a device profile 2266, in some examples is a DI 2266, in some examples is a control application 2266, and in some examples is a viewer application 2266. Since said SD has not been used before, said retrieval(s) for that added device have not been performed previously and therefore retrieval from remote storage 2266 is required. When these retrieval(s) 2267 2266 are performed, said retrieved data may be stored locally for faster retrieval in the future (based on the assumption that an SD that is used once is more likely to be used again). In some examples the selection of an SD from an SD server 2263 2264 triggers the automatic addition of said device in some examples to the currently opened user 2268 2242 2248, in some examples to the currently open identity(ies) 2268 2242 2248, and in some examples to the currently open SPLS(s) 2268 2242 2248—in all examples to update the available SD's presented 2263. In some examples after running appropriate RCTP components (as described elsewhere) for a SD selected from an SD server(s) 2263 2264 2267 2266, said SD is used 2270.
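
For illustration only, a minimal sketch of the automatic addition described in the two preceding paragraphs: a manually selected SD, or one chosen from an SD server, is appended to the currently open user's, identity's, and/or SPLS's device lists so that the presented list 2263 stays current. The structure names are illustrative assumptions.

    # Automatic addition of a newly selected SD (2267-2268); names are assumptions.
    def add_device(open_contexts: dict, device_id: str) -> None:
        for device_list in open_contexts.values():
            if device_id not in device_list:
                device_list.append(device_id)   # updates the presented SD's 2263

    contexts = {"user": ["tv"], "identity:work": [], "spls:family": []}
    add_device(contexts, "borrowed_projector")
    print(contexts)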

In some examples a user may choose to employ more than one SD 2263 by taking control of another SD 2269, or by changing from one SD 2270 to another SD 2269. In this case, in some examples another SD is selected by means described elsewhere 2263 such as in some examples by visible indications of known SD's 2261 2262 2263 2265 2266 2270, in some examples by manually selecting an SD 2265 2267 2266 2270, and in some examples by selecting an SD from an SD server 2264 2263 2265 2267 2266 2270. In some examples a user may choose to change one or a plurality of identities 2271 2272 while using the same SD(s) 2270 by changing the currently logged in identity(ies) 2271 2272, or by adding one or a plurality of identity(ies) 2271 2272. In this case, in some examples a different identity is selected, or one or a plurality of additional identities are added (by means described elsewhere) and this results in the use of the same SD 2270 by the new identity(ies). In some examples the previously described automation is immediately performed with the addition of each new identity 2271 2272—such as in some examples retrieving the appropriate SD's associated with each identity 2261 2262 2264, in some examples presenting visible indications of that identity's available SD's 2263, and then automating the connection and running of each SD selected 2265 2266 2267 2268 2270 based upon each selection of an SD 2265.

In some examples a user may choose to change one or a plurality of SPLS(s) 2271 2272 while using the same SD(s) 2270 by changing the currently logged in SPLS(s) 2271 2272, or by adding one or a plurality of SPLS(s) 2271 2272. In this case, in some examples a different SPLS is selected, or one or a plurality of additional SPLS(s) are added (by means described elsewhere) and this results in the use of the same SD 2270 by the new SPLS(s). In some examples the previously described automation is immediately performed with the addition of each new SPLS 2271 2272—such as in some examples retrieving the appropriate SD's associated with each SPLS 2261 2262 2264, in some examples presenting visible indications of available SD's 2263 in that SPLS, and then automating the connection and running of each SD selected 2265 2266 2267 2268 2270 based upon each selection of an SD 2265.

In these and other examples one or a plurality of identities, or one or a plurality of SPLS's, are enabled to use one or a plurality of SD's. Rather than requiring a user to remember, choose and control multiple steps during each addition of each SD, any current SD device state is maintained unless it is terminated, and the process of adding one or a plurality of SD's in some examples by one or a plurality of additional identities, and in some examples by one or a plurality of additional SPLS's, is automated so that it is simplified.

Subsidiary Devices Control Process (SDCP): FIG. 58, “RCTP—Subsidiary Devices Control Process (SDCP),” illustrates some examples for connecting a CD (controlling device) 2277 to one or a plurality of SD's (subsidiary devices) 2290 2292 2294 that have different device profiles 2291 2293 2295 2296, different data formats 2290 2292 2294, and different local storage 2290 2292 2294, over communications for remote control. In some examples of a SDCP, SD's include components such as those described in FIG. 55 2290 2292 2294, and may optionally store data in predetermined locations and predetermined format 2290 2294, with locally stored device profile data 2291 2295 and/or remotely stored device profile data 2293 2296 that relates to each SD; some examples of a SDCP include a configurable CD that may perform remote control of said SD(s) such as an LTP 2277 or an MTP 2277, which receives and utilizes stored device profile data 2291 2293 2295 2296 to receive data from said SD and to send control commands to said SD; some examples of a SDCP include a configurable data translator that responds to the device profile data 2291 2293 2295 2296 by receiving data from said SD and transforming it so that it may be incorporated into a control interface (as described elsewhere), and transforming control commands to said SD's data format (as described elsewhere); some examples of a SDCP include remote control communications that connect one or a plurality of CDs 2277 with one or a plurality of SD's 2290 2292 2294; some examples of a SDCP include access to one or a plurality of remote sources for retrieval of SD profiles 2266 in some examples, SD device interfaces (herein “DI”) 2266 in some examples, control applications 2266 in some examples, and viewer applications 2266 in some examples.

In some examples the remote control communications are selected to provide any subset of: in some examples direct remote control communications between a CD 2277 and one or a plurality of SD's 2290 2292 2294 by wired, wireless, Bluetooth, IR, or other communication means such that control commands are sent 2297 from a CD to an SD, and SD data is sent 2298 by an SD to a CD; in some examples remote control communications over a local network between a CD 2277 and one or a plurality of SD's 2290 2292 2294 such that control commands are sent 2280 2284 from a CD to an SD via a local network, and SD data is sent 2285 2281 by an SD to a CD via said local network; in some examples remote control communications over one or a plurality of wide area networks between a CD 2277 and one or a plurality of SD's 2290 2292 2294 such that control commands are sent 2282 2286 from a CD to an SD via a wide area network, and SD data is sent 2287 2283 by an SD to a CD via said wide area network; in some examples remote control communications via an (optional) SD server 2279 between a CD 2277 and one or a plurality of SD's 2290 2292 2294; in some examples the use of an (optional) SD server 2279 to identify one or a plurality of available SD's 2290 2292 2294, then perform remote control communications over a network between a CD 2277 and one or a plurality of SD's 2290 2292 2294; in some examples an SD extracts and communicates to a CD data representing its operating state and parameters on demand from a CD; in some examples an SD extracts and communicates to a CD data representing its operating state and parameters at programmed periodic intervals; in some examples an SD extracts data representing its operating state and parameters and stores it locally in memory for later communication to a CD; in some examples a CD receives data representing the operating state and parameters of an SD on demand; in some examples a CD receives data representing the operating state and parameters of an SD at programmed periodic intervals; in some examples a CD receives data representing the operating state and parameters of an SD and stores it locally in memory for later use by the CD; in some examples a CD transforms data representing the operating state and parameters of an SD so that it may be incorporated into a control interface (as described elsewhere); in some examples a CD provides a user interface in the form of a graphical window or screen that is used to see the state of an SD and/or select control instructions to be performed by an SD; in some examples a CD provides a user interface in the form of text options that are used to see the state of an SD and/or select control instructions to be performed by an SD; in some examples a CD provides a user interface in the form of one or a plurality of indicators, menus or choices that are used to see the state of an SD and/or select control instructions to be performed by an SD; in some examples a CD provides a user interface in another form of visual user interface that is used to see the state of an SD and/or select control instructions to be performed by an SD; in some examples a CD transforms control instructions into an SD's control commands in the SD's data format (as described elsewhere); in some examples a CD communicates control instructions to an SD where they are performed by the SD; in some examples a CD communicates transformed control commands to an SD where they are performed by the SD.
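
For illustration only, a minimal Python sketch of the command/data exchange above: the same control-command-out, SD-data-back step can run over a direct link, a local network, a wide area network, or an SD server relay if each is wrapped behind a common transport abstraction. The Transport interface and the loopback DirectLink stand-in are assumptions for this sketch.

    # Transport-agnostic CD/SD exchange (2280-2287, 2297-2298); names are assumptions.
    from abc import ABC, abstractmethod

    class Transport(ABC):
        @abstractmethod
        def send(self, payload: bytes) -> None: ...
        @abstractmethod
        def receive(self) -> bytes: ...

    class DirectLink(Transport):
        """Loopback stand-in for a direct wired/wireless/Bluetooth/IR link."""
        def __init__(self):
            self.buffer = b""
        def send(self, payload: bytes) -> None:
            self.buffer = payload       # control command CD -> SD 2297 2280 2282
        def receive(self) -> bytes:
            return self.buffer          # SD data SD -> CD 2298 2285 2287

    def remote_control_step(link: Transport, command: bytes) -> bytes:
        link.send(command)
        return link.receive()

    print(remote_control_step(DirectLink(), b"SET_TEMP 68"))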

SDCP Summary: In some examples the SDCP described herein provide one or a plurality of CDs (controlling devices) the ability to adapt to one or a plurality of SD's (subsidiary devices). Said adaptation in some examples is based upon an industry standard; in some examples said adaptation is based on an industry standard that a device vendor has followed in part and altered in part; and in some examples said adaptation is not based on a uniform or industry standard because a device vendor has not utilized one. In some examples this adaptation customizes and configures varying parts of said CD's software, processing, communications, protocols, data transformation(s), etc. while enabling it to use a consistent hardware platform. Said SDCP adaptation is expressed in the form of a device profile file. In some examples a CD's hardware and communications software may be adapted to fit a variety of different manufacturers, components, networks, protocols, etc. such as a subset of a CD 2277, communication network(s) 2276 2278, SD's 2290 2292 2294, and in some examples an (optional) SD server(s) 2279, and in some examples a remote source of device profiles 2266, in some examples a remote source of DI's 2266, in some examples a remote source of control applications 2266, and in some examples a remote source of viewer applications 2266.

Device profile: In some examples adaptations accommodate the differences based on instructions provided in the device profile of each SD 2291 2293 2295 2296, where the device profile's structure and definition encapsulates the variability of each SD. In some examples the device profile file addresses variability such as in some examples the communications physical interface; in some examples serial communication port settings; in some examples serial communication protocol; in some examples network communication port settings; in some examples network communication protocol; in some examples data locations (such as in some examples a register address, in some examples addresses, in some examples storage location[s]); in some examples data attributes (such as in some examples how data is represented such as by types [integer, floating-point, Boolean, etc.], conditional based on a parameter, min/max scaling, alarm conditions, alarm levels, or any processing that produces meaning [such as status codes, alarm codes, transforms, etc.]); in some examples operating states; in some examples parameters (such as in some examples how the data should be accessed, in some examples a method for retaining data in memory, in some examples the frequency of data access, etc.); in some examples device instructions or commands; in some examples instructions transformation specification, or commands transformation specification (as described elsewhere); in some examples device interface screens; in some examples user interface screens. In some examples a device profile utilizes and follows an industry standard; in some examples a device profile utilizes part but not all of an industry standard; and in some examples a device profile is independent of industry standards. In some examples the device profile is altered by addition; in some examples the device profile is altered by subtraction; in some examples the device profile is altered by extension; and in some examples the device profile is altered as additional subsidiary device variability is developed and added. In some examples a device profile allows adaptive representation of SD data, so a CD can adapt to the different and varying ways that each manufacturer and vendor represents the data within each device.
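
For illustration only, the following shows what a device profile for a hypothetical thermostat SD might contain, covering the kinds of variability enumerated above (communications settings, data locations, data attributes, parameters, commands, and interface screens). The field names and values are assumptions for this sketch, not a normative schema.

    # Illustrative device profile for a hypothetical thermostat SD.
    thermostat_profile = {
        "device_type": "thermostat",
        "communications": {
            "physical_interface": "ethernet",       # communications interface
            "network_protocol": "modbus_tcp",       # network protocol
            "port": 502,                            # network port settings
        },
        "data_locations": {"current_temp": 0x0001, "setpoint": 0x0002},
        "data_attributes": {
            "current_temp": {"type": "float", "scale": 0.1,
                             "min": -40.0, "max": 150.0,
                             "alarm_above": 90.0},  # alarm condition/level
        },
        "parameters": {"poll_interval_s": 30, "retain_in_memory": True},
        "commands": {"set_temperature": {"register": 0x0002, "type": "int"}},
        "interface_screens": ["thermostat_main", "thermostat_schedule"],
    }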

In some examples a CD requests and receives data collected from a SD; and in some examples a CD receives data transmitted by a SD. In some examples said received data is transformed based on values defined in a device profile (as described elsewhere), and placed and stored in a data table based on values defined in a device profile, for remote control use by a CD. In some examples said remote control instructions are transformed into device control commands (as described elsewhere) for transmission to an SD. As a result in some examples a device profile provides adaptability to the variability of a given SD from a given manufacturer or vendor.
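
For illustration only, a minimal sketch of the profile-driven transformation just described: a raw SD reading is scaled and checked against alarm values defined in the device profile, then placed in a data table for remote control use by a CD. The profile contents and function names are assumptions for this sketch.

    # Profile-driven transformation of received SD data; names are assumptions.
    profile = {"data_attributes": {"current_temp":
               {"scale": 0.1, "alarm_above": 90.0}}}

    def transform_reading(profile: dict, name: str, raw_value: int) -> float:
        attrs = profile["data_attributes"][name]
        value = raw_value * attrs.get("scale", 1.0)     # min/max scaling
        if "alarm_above" in attrs and value > attrs["alarm_above"]:
            print(f"ALARM: {name} = {value}")           # alarm condition
        return value

    data_table = {"current_temp": transform_reading(profile, "current_temp", 680)}
    print(data_table)   # {'current_temp': 68.0}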

Sources: In some examples a device profile 2291 2293 2295 2296 is defined and provided by a device's vendor 2290 2292 2294 2297; in some examples a device profile 2291 2293 2295 2296 is defined and provided by a third-party developer 2297; in some examples a device profile 2291 2293 2295 2296 is defined and provided by a device user 2297; in some examples a device profile 2291 2293 2295 2296 is defined and provided by others such as an open-source contributor 2297 or an SD access service 2279. In some examples a control application 2296 2277 is defined and provided by a device's vendor 2290 2292 2294 2298; in some examples a control application 2296 2277 is defined and provided by a third-party developer 2298; in some examples a control application 2296 2277 is defined and provided by a device user 2298; in some examples a control application 2296 2277 is defined and provided by others such as an open-source contributor 2298 or an SD access service 2279. In some examples a viewer application 2296 2277 is defined and provided by a device's vendor 2290 2292 2294 2298; in some examples a viewer application 2296 2277 is defined and provided by a third-party developer 2298; in some examples a viewer application 2296 2277 is defined and provided by a device user 2298; in some examples a viewer application 2296 2277 is defined and provided by others such as an open-source contributor 2298 or an SD access service 2279.

Application: In some examples a device profile is installed in a device by its vendor at the time of manufacture and remains unchanged unless that individual device is reconfigured or updated; in some examples a device profile is interpreted and placed in a device by command or instruction, and the resulting remote control operation of said device is configured by the specific device profile used, in which case one or a plurality of devices are updated as soon as the device profile utilized is updated; in some examples after a device is configured by a device profile (whether the device profile is installed at manufacture or placed in a device by command or instruction) additional changes may be made to the configuration of said device by transmitting an updated device profile to the device and installing it by command or instruction.

Subsidiary Devices Protocols: Turning now to FIG. 59, “RCTP—Subsidiary Devices Protocols,” some examples illustrate the retrieval or generation of an appropriate protocol(s) for communications and/or control between a CD (controlling device) and an SD (subsidiary device) over a communication network, or in some examples by direct communications between a CD and an SD. In some examples a CD is capable of controlling an SD as described elsewhere using a control protocol(s) and/or a communications protocol(s) that in some examples is a standard that is already developed (such as in some examples RDP [Remote Desktop Protocol], in some examples UPnP [Universal Plug and Play and its DCP, or Device Control Protocol], in some examples Modbus, in some examples DLNA [Digital Living Network Alliance], in some examples WiFi, in some examples 802.11b/g/n, in some examples HTTP, in some examples Ethernet, or in some examples another known protocol); in some examples a protocol that is developed in the future; and in some examples a protocol that is generated as needed by known means then stored for future re-use. In some examples one or a plurality of known and/or generated protocols are stored locally and/or remotely such as in some examples in local memory, and in some examples on a server. In some examples said stored known protocols can be modified such as by addition, deletion, updating, replacing, or editing.

In some examples a CD is utilized to present a list of SD's (as described elsewhere) and when one SD is selected its device profile is retrieved (as described elsewhere). Said device profile identifies said selected SD 2304 and that SD's protocol(s) 2304, providing data so the CD can determine the type of SD being controlled remotely 2304, and the protocol(s) required in some examples for communications 2304 and in some examples for control 2304. In some examples said CD uses the identified SD protocol(s) 2304 to determine if said protocol(s) is known and stored locally 2306, or if not then if it is known and stored remotely 2306. In some examples said protocol(s) 2304 is known and stored locally 2306, in which case it is recognized by the system and retrieved for use in establishing and maintaining SD communication and control 2310, and remote control proceeds 2310. In some examples said protocol(s) 2304 is known but not stored locally 2306, in which case it is recognized by the system and retrieved 2307 from remote protocol storage 2308 (such as in some examples in a server[s], in some examples in a protocol database[s], in some examples in a protocol library[ies], in some examples in a protocol access service[s], in some examples in another storage device[s]) for use in establishing and maintaining SD communication and control 2310, and remote control proceeds 2310.

In some examples said protocol(s) 2304 is not known 2306 2307 and/or not retrievable 2309, and in that case a uniform standard protocol is retrieved and used to generate a protocol (herein named a “generated protocol”) based upon said device's device profile 2311 (as described elsewhere). In some examples said generated protocol 2311 is successful enough to be used in establishing and maintaining SD communication and control 2310, and remote control proceeds 2310. In some examples said generated protocol 2311 is successful enough to be used 2310 and is then saved for future re-use 2313 2312 in said remote protocol storage 2308 (as described elsewhere). In some examples the attempt to generate a protocol 2311 fails 2313 and in that case AKM steps are employed 2314 (as described elsewhere); if said AKM steps succeed 2314 then the resulting solution 2314 is used in establishing and maintaining SD communication and control 2310, and remote control proceeds 2310; but if said AKM steps fail 2314 then the AKM error process initiates 2314, and an appropriately worded error message is displayed to the user 2315.
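
For illustration only, a minimal sketch of the FIG. 59 resolution chain just described: look the SD's protocol up locally, then in remote protocol storage, then fall back to generating one from the device profile, and surface an error only when every step fails. The store and function names are assumptions for this sketch.

    # Protocol resolution chain (2304-2315); names are assumptions.
    LOCAL_PROTOCOLS = {"modbus_tcp": "..."}     # known and stored locally 2306
    REMOTE_PROTOCOLS = {"upnp_dcp": "..."}      # remote protocol storage 2308

    def generate_protocol(device_profile: dict) -> str:
        # Placeholder for generation from a uniform standard protocol 2311.
        return "generated:" + ",".join(device_profile.get("commands", []))

    def resolve_protocol(name: str, device_profile: dict) -> str:
        if name in LOCAL_PROTOCOLS:             # recognized locally 2306
            return LOCAL_PROTOCOLS[name]
        if name in REMOTE_PROTOCOLS:            # retrieved remotely 2307 2308
            LOCAL_PROTOCOLS[name] = REMOTE_PROTOCOLS[name]
            return LOCAL_PROTOCOLS[name]
        generated = generate_protocol(device_profile)
        if generated:
            REMOTE_PROTOCOLS[name] = generated  # saved for future re-use 2312
            return generated
        raise RuntimeError("protocol unavailable; AKM steps 2314 / error 2315")

    print(resolve_protocol("vendor_x", {"commands": ["identify", "state"]}))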

In some examples a generated protocol 2311 is created by utilizing a uniform standard protocol and data in a device profile. In some examples said uniform standard protocol is stored locally 2306, and in some examples said uniform standard protocol is retrieved from remote protocol storage 2308. In some examples said generated protocol 2311 is created by factoring and abstracting common elements, instructions, commands, data types, etc. out of the uniform standard protocol and the specific SD's device profile, and then generating a protocol using the common elements 2311. In some examples said generated protocol 2311 is created by factoring and abstracting common elements, instructions, commands, data types, etc. out of the uniform standard protocol and the specific SD's device profile, and then creating a translation table using the common elements 2311 and writing said translation table to memory with said translation table used to establish and maintain SD communication and control 2310 (as described elsewhere). In some examples identifiable common elements include common elements in protocols such as in some examples identification(s), in some examples user IDs, in some examples create, in some examples select an instruction, in some examples perform an instruction, in some examples provide state information, in some examples set an alarm or an alarm condition; in some examples terminate a session, in some examples other common elements can be used instead of or in addition to these examples; non-common elements are discarded; and a new “common protocol” is generated based on the common elements.
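
For illustration only, a minimal sketch of the factoring step just described: the element names shared by a uniform standard protocol and a specific SD's device profile are kept, non-common elements are discarded, and the result is a translation table mapping common elements to the SD's native commands. The element names and command strings are assumptions for this sketch.

    # Factoring common elements into a translation table (2311); names are assumptions.
    def generate_common_protocol(standard: dict, device_profile: dict) -> dict:
        common = [k for k in standard if k in device_profile]  # common elements
        # Non-common elements are discarded; the table maps each common
        # element to the SD's native command.
        return {element: device_profile[element] for element in common}

    uniform_standard = {"identify": "ID?", "perform": "DO",
                        "state": "STATE?", "terminate": "BYE"}
    sd_profile = {"identify": "0x01", "state": "0x02", "vendor_extra": "0x7F"}
    translation_table = generate_common_protocol(uniform_standard, sd_profile)
    print(translation_table)   # {'identify': '0x01', 'state': '0x02'}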

In some examples a third-party (such as in some examples the vendor of the SD, in some examples a developer of similar SD protocols, in some examples a developer of standard protocols, in some examples a user of that SD, or in some examples another third-party) provides information such as which elements of the SD's protocol are unique and which are common. In some examples a generated protocol 2311 may be created by an application or a module that is designed to recognize, identify and extract common elements from one or a plurality of unknown protocols.

In some examples after said generated protocol 2311 has been generated, it is used to establish and maintain SD communication 2310, and remote control proceeds 2310; and in some examples said generated protocol 2311 is used to establish and maintain SD control 2310, and remote control proceeds. Therefore, in some examples a CD will support retrievable protocols A through N while a specific SD runs protocol X, and the two devices may still establish the CD's remote control of said SD using a generated protocol 2311 based on common elements between a uniform standard protocol and protocol X. As a result, some CDs can establish remote control of some SD's that run different and unknown protocols without needing to develop (ahead of time and by a separate developer or by a separate development effort) a unique protocol or interface for that combination of CD and SD. In addition, said generated protocol 2311 can be saved 2312 in remote protocol storage 2308 for future retrieval 2307 and re-use 2310 by that combination of CD and SD. As a result in some examples differences in some communications protocols and some control protocols may be abstracted out in a system, method or process that provides for connecting some CDs with some SD's in some examples; and a system, method or process that provides for some CDs to control some SD's in some examples. In addition, in some examples the protocols of new CDs and new SD's may be written to a set of common elements that fit said protocol generation capability 2311 and at least approximate a uniform standard protocol, and thereby new devices may be made capable of communications and remote control in an easier and more direct process.

In some examples these systems, methods and processes may be implemented with hardware; in some examples they may be implemented with software (such as in some examples program code, in some examples instructions, in some examples modules, in some examples services); and in some examples they may be implemented with a combination of both hardware and software (such as in some examples a server running an application and storing a database, in some examples a service, in some examples a protocol generation application). In some examples these may take the form of software that runs on hardware and can access stored data so they become an apparatus or machine for practicing this system, method or process.

Control and viewer applications: FIG. 60, “RCTP—Control and Viewer Applications,” illustrates some examples in which control applications 2346 2353 2359 and/or viewer applications 2347 2355 2360 are run in some examples by one or a plurality of CDs (controlling devices) 2344, in some examples by one or a plurality of SD's (subsidiary devices) 2352, in some examples by one or a plurality of servers or remote services 2356, and in some examples by one or a plurality of specialized SD servers or services 2350 (as described elsewhere). Said control applications and/or said viewer applications can be requested and downloaded in some examples from remote storage 2349, in some examples from an optional SD server 2350, in some examples from a subsidiary device 2352, in some examples from a server or a service 2356, and in some examples from an SD server or service 2350. Said control applications and/or said viewer applications can be requested and downloaded in some examples by means of a browser 2345 2353 2358 from said sources, or by other means as described elsewhere. After being downloaded said control applications and/or viewer applications can be stored locally for faster future retrieval and use, in some examples by CDs 2344, in some examples by some SD's 2352, in some examples by servers or services 2356, and in some examples by SD servers 2350.

Said control application(s) 2346 2353 2359 may be used in some examples for initiating and/or terminating a control session; in some examples for gathering local control information from a subsidiary device; in some examples for sending and/or receiving control information; in some examples for sending and/or receiving control instructions or commands; or in some examples for other known remote control purposes or functions. Said viewer application(s) 2347 2355 2360 may be used in some examples for initiating and/or terminating a session; in some examples for initiating and/or terminating the viewing of a device's interface; in some examples for requesting, sending or receiving a device's current state; in some examples for actively or periodically monitoring a device's current state; or in some examples for other known remote control purposes or functions. In some examples said control application(s) and/or viewer application(s) may be run from or within a browser 2345 2353 2358; in some examples said browser-based application(s) may provide all or a subset of the functions and features of a separate control application(s) 2346 2354 2359; and in some examples a separate control application(s) and/or viewer application(s) may provide all or a subset of the functions and features of a device's own control interface(s) 2346 2347 2354 2355 2359 2360.

In some examples the control application(s) 2346 2354 2359 that run on one or a plurality of CDs 2344, one or a plurality of SD's 2352, one or a plurality of servers 2356, and/or one or a plurality of SD servers 2350 are requested and downloaded by processes that are described elsewhere. In some examples control application(s) and/or viewer application(s) download requests are sent 2362 by a CD 2344, and control application(s) and/or viewer application(s) are received 2363 by a CD 2344. In some examples control application(s) and/or viewer application(s) download requests are received 2366 by a SD 2352, and control application(s) and/or viewer application(s) are sent 2367 by a SD 2352. In some examples control application(s) and/or viewer application(s) download requests are received 2368 by a server 2356 or a database 2349, and control application(s) and/or viewer application(s) are sent 2369 by a server 2356 or a database 2349. In some examples control application(s) and/or viewer application(s) download requests are received 2364 by an (optional) SD server 2350, and control application(s) and/or viewer application(s) are sent 2365 by an (optional) SD server 2350. In some examples control application(s) and/or viewer application(s) download requests are sent by a SD 2352, and control application(s) and/or viewer application(s) are received by a SD 2352. In some examples control application(s) and/or viewer application(s) download requests are sent by a server 2357, and control application(s) and/or viewer application(s) are received by a server 2357.
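
For illustration only, a minimal sketch of the FIG. 60 download exchange just described: a CD requests a control or viewer application, and whichever source holds it (an SD, a server or database, or an optional SD server) returns it. The source catalog and application names are assumptions for this sketch.

    # Download exchange for control/viewer applications (2362-2369); names are assumptions.
    SOURCES = {
        "sd": {"camera_control_app": b"..."},            # sent by an SD 2367
        "server": {"camera_viewer_app": b"..."},         # sent by a server 2369
        "sd_server": {"thermostat_control_app": b"..."}, # optional SD server 2365
    }

    def download(app_name: str) -> bytes:
        for catalog in SOURCES.values():    # requests 2362 2366 2368 2364
            if app_name in catalog:
                return catalog[app_name]    # received by the requester 2363
        raise KeyError(app_name)

    print(download("camera_viewer_app"))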

In variations, in some examples the downloads requested 2362 2366 2368 2364 and sent 2363 2367 2369 2365 may include an individual request, or any combination or subset of a plurality of requests, such as in some examples the downloads requested 2362 2366 2368 2364 and sent 2363 2367 2369 2365 may include device profiles; in some examples the downloads requested 2362 2366 2368 2364 and sent 2363 2367 2369 2365 may include DI (device interfaces); in some examples the downloads requested 2362 2366 2368 2364 and sent 2363 2367 2369 2365 may include protocols or other data required to establish communications; in some examples the downloads requested 2362 2366 2368 2364 and sent 2363 2367 2369 2365 may include protocols, device instructions, or other data required to establish and maintain remote control; in some examples the downloads requested 2362 2366 2368 2364 and sent 2363 2367 2369 2365 may include device instructions or other data required to generate a protocol; in some examples the downloads requested 2362 2366 2368 2364 and sent 2363 2367 2369 2365 may include data required to perform features or functions relating to RCTP systems, methods and/or processes; and in some examples the downloads requested 2362 2366 2368 2364 and sent 2363 2367 2369 2365 may include any subset of other data required to perform features or functions relating to RCTP systems, methods and/or processes.

Alternatively, in some examples one or a plurality of download requests are received by remote storage 2349, and said requested downloads are sent by remote storage 2349. Alternatively, in some examples one or a plurality of download requests are received by a CD 2344, and said requested downloads are sent by a CD 2344. Alternatively, in some examples one or a plurality of download requests are received by a Teleportal Utility (as described elsewhere), and said requested downloads are sent by a Teleportal Utility.

Initiate SD Control and Viewer Applications: As described elsewhere, in some examples a control application(s) and/or a viewer application(s) are utilized for RCTP systems, methods and processes; while in some examples these are not utilized. Some examples of the process of retrieving and running said control application(s) and/or viewer application(s) are illustrated in FIG. 61, “RCTP—Initiate SD Control and Viewer Applications,” which includes a CD 2321 that requires a control application and/or a viewer application for RCTP control of an SD 2322 that also requires a control application and/or a viewer application.

Said examples begin when a user selects an SD for remote control 2323 (as described elsewhere), which (optionally and if needed) retrieves the device profile 2323 from either local storage 2320, remote storage 2320, or directly from a subsidiary device 2322. In some examples if the required control application and/or viewer application are stored locally 2324, they are retrieved directly and executed 2327. In some examples if the required control application and/or viewer application are not stored locally 2324, they are retrieved 2326 from remote storage 2320, and executed 2327. In some examples when the required control application and/or viewer application are not stored locally 2324, then in some examples they may be auto-retrieved 2325 2326 directly as one step in selecting a specific SD, auto-downloaded from remote storage 2320, or retrieved directly from the SD 2322, and executed 2327. In some examples when the required control application and/or viewer application are not stored locally 2324, then in some examples they may be manually retrieved by means of a browser 2325 which utilizes a hyperlink, bookmark, button, widget, servlet, search, or other web navigation to open a Web page 2325 that lists the appropriate control application and/or viewer application required so that the user may select it and retrieve it 2326 from remote storage 2320 by downloading, and then execute said downloaded application(s) 2327. Alternatively, in some examples when the required control application and/or viewer application are not stored locally 2324, then in some examples they may be manually retrieved by means of a remote control interface 2325 or application 2325 which utilizes a button, menu, widget, servlet, search, or other user interface component that lists the appropriate control application and/or viewer application required by the selected SD so that the user may select it and retrieve it 2326 from remote storage 2320 by downloading, and then execute said downloaded application(s) 2327.

Alternatively, a remote control interface may be generated under program control 2327 such as by Java commands, such as in some examples when the required control application and/or viewer application are not stored locally 2324 and they are also not retrievable remotely 2320; or as in some examples when a uniform remote control interface is desirable. In some examples said generated remote control interface can include a subset of factored standard commands based on each SD's retrieved device profile 2320 2322 (such as in some examples turn on, end [control session], exit, pause, suspend, open, run, display, scroll, highlight, link, click, use, edit, save, record, play, stop, fast-forward, fast reverse, pan, tilt, zoom, look up, find, contact, connect, communicate, attach, transmit, disconnect, copy, combine, distribute, redistribute, broadcast, charge, bill/invoice, make payment, accept payment, etc.). Additionally and optionally, in some examples said generated remote control interface may include a uniform interface (as described elsewhere such as in FIGS. 183 through 187) that may be adapted to the specific devices in use (as described elsewhere such as in FIGS. 184 and 185). In some examples a generated interface 2327 may include only a control application 2327, and in some examples a generated interface 2327 may include only a viewer application 2327, and in some examples a generated interface 2327 may include both a control application 2327 and a viewer application 2327.
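
For illustration only, a minimal sketch of generating a remote control interface under program control when no control or viewer application is retrievable: the generated interface exposes only the subset of factored standard commands that the SD's retrieved device profile supports. The command list and profile fields are assumptions for this sketch.

    # Generated remote control interface (2327) from factored standard commands.
    STANDARD_COMMANDS = ["turn on", "end", "pause", "play", "stop", "record",
                         "pan", "tilt", "zoom", "save", "disconnect"]

    def generate_interface(device_profile: dict) -> list:
        supported = set(device_profile.get("supported_commands", []))
        # Keep only the standard commands this SD actually supports.
        return [cmd for cmd in STANDARD_COMMANDS if cmd in supported]

    camera_profile = {"supported_commands": ["pan", "tilt", "zoom",
                                             "record", "disconnect"]}
    print(generate_interface(camera_profile))
    # ['record', 'pan', 'tilt', 'zoom', 'disconnect']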

In some examples the SD 2322 does not need a control application and/or viewer application 2334, in which case it continues processing said CD requests and instructions 2327 2333 as described in FIG. 62 2338. In some examples a selected SD needs a control application and/or viewer application 2334 and has that stored locally 2335, in which case it retrieves said application(s) and runs it 2336. In some examples the selected SD needs a control application and/or viewer application 2334 and does not have that stored locally 2335, in which case it notifies the CD 2338 that it needs a required control application and/or viewer application 2335 2338; the CD can then retrieve 2329 the device's required application(s) and download said application(s) 2330 to the SD 2337 where the SD can execute the required application(s) 2336. In some examples, after said required control application and/or viewer applications have been executed 2336 said CD requests and instructions 2327 2333 are processed as described in FIG. 62 2338.

For an illustration, in some examples a user of a CD 2321 selects a specific SD 2323 and its control application is not available on the CD 2324. In some examples a manual process is employed to retrieve and execute said control application. In some examples a web browser is manually opened 2325 to a remote system 2320 which provides its home page. In some examples downloadable SD control applications are accessible from said home page 2325 2320 by means of a hyperlink, a menu, a widget, a servlet, a search field, a support page, a downloads page, or other known web navigation means. In some examples a request for the category or list of downloadable SD control applications 2325 2320 is made using a web navigation means, and the downloadable SD control applications are displayed such as in a list of hyperlinks, a pulldown list, or other known web selection means. In some examples the specific selected SD's control application is selected for download from a known web selection means 2326 2320, and the SD control application is downloaded to the CD. In some examples the CD user runs the downloaded application by clicking on it or activating it by other known means 2327.

For another illustration, in some examples a user of a CD 2321 selects a specific SD 2323 and its control application is not available on the CD 2324. In some examples an automated process is employed to retrieve and run said control application. In some examples the selection of said SD 2323 auto-retrieves its device profile 2324 such as in some examples from local storage 2321, in some examples from remote storage 2320, and in some examples from a selected SD 2322. As described elsewhere, in some examples said device profile includes the name and address of its control application (and/or its viewer application) so the SD selection process includes utilizing said data to auto-retrieve the SD's control application 2325 2326, which in some examples is from remote storage 2320 and in some examples is from the SD 2322. In some examples the SD control application is a compressed file (such as a zip file) in which case the retrieved file 2326 is auto-extracted and executed 2327.

In some examples when said control application runs 2327, and/or viewer application runs 2327, the control application and/or viewer application sends a request to the SD 2333 and said SD parses and attempts to run the request 2333, which in some examples is a device control request 2333, and in some examples is a viewer (device monitoring) request 2333. In some examples CD requests 2327 2333 may include session creation; instructions, commands or requests within a created session; session deletion; or session timeout. In some examples CD requests may include other processing as described elsewhere, such as in some examples in FIGS. 62 and 63. In some examples communications paths 2323 2326 2327 2320 2333 2335 2328 2330 2337 may be secure (e.g., encrypted), and in some examples communications paths 2323 2326 2327 2320 2333 2335 2328 2330 2337 may be non-secure. In some examples multiple communications paths 2323 2326 2327 2320 2333 2335 2328 2330 2337 may operate within a single session.

Control Subsidiary Device: FIG. 62, “RCTP—Control Subsidiary Device,” illustrates some examples of remote control of an SD by a CD. In some examples an SD has been selected 2376 as described elsewhere, and said CD sends a connection control request to said SD 2377. In some examples an SD server 2378 was used to select an SD as described elsewhere, and said CD in some examples sends a connection control request to said SD by means of the SD server 2379, and in some examples said CD sends a connection control request directly to said SD 2377. In some examples the appropriate device profile, control application(s) and/or viewer application(s) have been retrieved and executed as described elsewhere, and said application(s) is used to send a connection control request to said SD 2377 2379. In some examples said connection control request 2377 2379 is sent via communications paths as described elsewhere to initiate a control session, using a messaging system and protocol that the SD supports and a message format that the SD can receive, parse and act upon. In some examples a control session is the period during which an SD is available for control by a CD. In some examples a control session continues after a controlling CD has exited, during which the SD remains active and available for control, until the SD's control session reaches the end of a timeout period. In some examples a control session can be enabled by any remote control technology such as in some examples Microsoft's Terminal Services, in some examples Modbus, in some examples UPnP, in some examples a vendor's proprietary communications and/or control protocol, in some examples a vendor's proprietary adaptation of a standard protocol, in some examples any other known communications and/or remote control technology or application. In some examples an SD receives a connection control request 2382, and (optionally) the CD, SD and/or identity may be authenticated 2383 and/or authorized 2383 using known authentication processes or TP authentication and authorization processes described elsewhere. In some examples after (optional) authentication 2383 the CD connects to the SD 2384 using in some examples a known communications protocol and in some examples a known control protocol; and said protocols are retrieved from memory or storage (whether local or remote) and employed to establish said connection. In some examples said communications protocol and/or said control protocol are unknown and therefore may be generated to establish said connection and control 2384, as described elsewhere. If a protocol is generated and used to establish a successful connection 2384 it may be stored in a pre-determined library of protocols (as described elsewhere) for future remote control sessions between that type of CD and that type of SD.
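
For illustration only, a minimal sketch of the session setup just described: the CD sends a connection control request, the SD optionally authenticates and authorizes it, and the resulting session remains available until a timeout period elapses. The class, field, and identifier names are assumptions for this sketch.

    # Control session setup with optional authentication (2377-2384, 2397-2398).
    import time

    class ControlSession:
        def __init__(self, cd_id: str, sd_id: str, timeout_s: float = 300.0):
            self.cd_id, self.sd_id = cd_id, sd_id
            self.timeout_s = timeout_s
            self.last_used = time.monotonic()

        def is_active(self) -> bool:    # session ends at timeout 2397 2398
            return time.monotonic() - self.last_used < self.timeout_s

        def touch(self) -> None:        # record activity to defer the timeout
            self.last_used = time.monotonic()

    def request_connection(cd_id: str, sd_id: str,
                           authorized: set) -> ControlSession:
        if cd_id not in authorized:     # optional authentication/authorization 2383
            raise PermissionError(cd_id)
        return ControlSession(cd_id, sd_id)     # CD connects to SD 2384

    session = request_connection("my_ltp", "office_printer", {"my_ltp"})
    print(session.is_active())   # True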

In some examples it is after a CD connects to an SD 2384 that a control application 2385 and/or a viewer application 2385 are executed. In some examples it is after a CD connects to an SD 2384 that a DI (Device Interface) is downloaded and displayed on the CD's screen 2389, or other SD data are retrieved and displayed as needed 2389. In some examples a control application 2389 may display said DI and other data on a CD's screen 2389. In some examples a viewer application 2389 may display said DI and other data on a CD's screen 2389. In some examples other known means are utilized to display on a CD's screen means for remote control 2389 such as an interface that lists the available remote control options. In some examples different remote control components, widgets, visual interfaces, etc. may be included in a CD's control screen 2389 for each type of remotely controlled SD 2393; for one example, if the SD contains a PTZ camera then the CD screen may include a compass rose so the camera's Pan, Tilt and Zoom may be remotely controlled; for another example if the SD contains a thermostat then the CD screen may include a vertical (or optionally horizontal) slider with Fahrenheit and/or Centigrade temperature markings with an indicator that a user may move to set a desired temperature; for another example if the SD is a PC then the CD screen may display the entire SD's interface by means such as RDP (Remote Desktop Protocol) for direct user control of the SD PC. In some examples use may (optionally) be monitored 2386 and logged 2386 by known means such as in some examples when said use has been set up by an SD server (as described elsewhere) 2386, in some examples when a user pays for use 2386, in some examples when use is based on a membership or a subscription 2386, in some examples when use is free but includes retrieving and displaying sponsored marketing or messages 2386, and in other types of uses where it is desirable to monitor 2386 and/or log 2386 use(s). Said (optional) monitoring data 2386 and/or (optional) log data 2386 may be communicated by one or a plurality of networks to the appropriate monitoring and/or logging application or facility where said data is received and stored (such as in some examples 2508 2507 in FIG. 69).

In some examples a CD is now capable of controlling an SD, in which the user of the CD can operate the SD to perform any available SD function (such as its features, functions or applications; or settings for any of those features, functions or applications), or use any desired SD resource (such as play, use or edit its stored content) that is available for remote control 2389 2390. In some examples the CD displays 2389 the SD's control panel; in some examples the CD displays 2389 the SD's user interface; in some examples the CD displays 2389 an adapted or third-party developed version of the SD's control panel or user interface; in some examples the CD displays 2389 buttons, icons, GUI interface, lists, control panel, or menus that displays SD instructions, commands and features; and in some examples the CD displays 2389 a subset of the SD's full set of controls. In some examples the CD's display 2389 can be designed and configured in any number of known ways to include any or all of the available SD controls that may be utilized for remote control from a CD.

In some examples the CD's display 2389 can include a “show all” button, link or command to list all of the currently available SD commands or instructions that may be utilized for remote control of the SD; in some examples said “show all” list may be in alphabetical order; in some examples said “show all” list may be a hierarchy; in some examples said “show all” list may be in frequency-of-use order; in some examples said “show all” list may be a multi-level menu; and in some examples said “show all” list may be in a different order or organization. In some examples said “show all” list may be pre-determined, saved and retrieved from storage; while in some examples said “show all” list may be constructed when requested by retrieving the CD's display 2389 from memory, then sorting and reorganizing it in the order and format requested, for display and presentation on demand. In some examples said “show all” list may be searchable by keyword, or by a keyword string. In some examples said “show all” list includes labeled choices that the user may select individually 2390 to control the SD.

In some examples each displayed 2389 or listed 2389 SD instruction 2390, command 2390, feature 2390, icon 2390, GUI interface 2390, widget 2390, etc. has associated with it an SD control command that causes the SD to perform that specific step or function. In some examples the CD's user can enter an SD control instruction 2390 that corresponds to an area of interest by selecting a button 2389, icon 2389, GUI interface component 2389, listed choice 2389, control panel component 2389, menu choice 2389, etc. from the available choices. In some examples said control instruction 2390 selects the SD command associated with said control instruction 2390 and determines if translation into a specific SD control command is required 2391. In some examples no translation is required 2391 and the SD command associated with said control instruction 2390 is transmitted to the SD 2392. In some examples the CD interface 2389 displays one or a plurality of SD instructions 2390 that require translation 2391 2399 into SD commands 2399 before being transmitted to the SD 2392 (with said translation 2399 described in more detail elsewhere, such as in FIG. 63). Alternatively, in some examples translation of an SD instruction 2390 is required 2391 2399 and said translation is performed at the SD 2391 2399 after said SD instruction 2390 is transmitted to the SD. Alternatively, in some examples translation of an SD instruction 2390 is required 2391 2399 and said SD instruction 2390 is transmitted to an SD server or a third-party application or service which performs said translation remotely 2391 2399.
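
For illustration only, a minimal sketch of the instruction path just described: each interface element carries an associated SD control command; instructions that need translation are converted before transmission, and those that are already SD commands pass through unchanged. The translation table and names are assumptions for this sketch.

    # Instruction entry, optional translation, and transmission (2390-2392, 2399).
    TRANSLATION_TABLE = {"warmer": "SET_TEMP +1", "cooler": "SET_TEMP -1"}

    def send_instruction(instruction: str, transmit) -> None:
        if instruction in TRANSLATION_TABLE:    # translation required 2391 2399
            command = TRANSLATION_TABLE[instruction]
        else:                                   # already an SD command 2391
            command = instruction
        transmit(command)                       # transmitted to the SD 2392

    sent = []
    send_instruction("warmer", sent.append)
    send_instruction("POWER_OFF", sent.append)
    print(sent)   # ['SET_TEMP +1', 'POWER_OFF']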

In some examples a CD can utilize said remote control means (such as in some examples a control application, in some examples a viewer application, in some examples both a control application and a viewer application, in some examples a generated remote control interface, in some examples no control or viewer application and no generated remote control interface) to control two or a plurality of SD's simultaneously (as described elsewhere such as in FIG. 56). In some examples a user can select between the plurality of simultaneously controlled SD's the one SD that the user wants to control remotely at a given moment. In some examples a user can select between the plurality of simultaneously controlled SD's the two or a plurality of SD's that the user wants to control remotely at a given moment. In some examples a user can select two or a plurality of remotely controllable SD's to perform a single remote control instruction that corresponds to said selected SD's; such as in some examples to open two or a plurality of SD's simultaneously, in some examples to close two or a plurality of SD's simultaneously, in some examples to start the recording function of two or a plurality of SD's by entering a single remote control instruction; and in some other examples to perform a different but commonly available remote control feature or function with two or a plurality of SD's simultaneously.

In some examples the SD remote control instruction selected 2390 and transmitted 2392 (whether or not translated into an SD command 2391 2399) is received by the SD 2393, where it is utilized to perform the selected instruction 2393. In some examples performing an instruction includes entering a mode 2393; in some examples performing an instruction includes executing a command 2393; in some examples performing an instruction includes running an SD application 2393; in some examples performing an instruction includes running an SD application 2393 and loading data (or in some examples a data file, or in some examples data attributes or conditions) from said SD or from a remote source; in some examples performing an instruction includes another feature 2393, function 2393, capability 2393, etc. of the remotely controlled SD by known remote control means.

In some examples an SD receives a remote control instruction 2393 and performs it 2393 resulting in a new SD state 2394, SD condition 2394, SD data 2394, etc. In some examples said updated SD state, condition, data, etc. is transmitted to the CD 2394 under automated program control. Alternatively, in some examples an SD 2393 acquires and transmits its updated state 2394 when it receives 2394 an instruction to do so 2390 that is transmitted by a CD 2390 2391 2399 2392, and is received and executed by an SD 2393 2394.

In some examples said updated and transmitted SD state, condition, data, etc. does not need to be translated to be displayed 2389 and/or utilized 2389 2390 by said CD, so the updated and transmitted SD state, condition, data, etc. are transmitted to the CD 2394 2395 2389. In some examples said updated and transmitted SD state, condition, data, etc. needs to be translated in order to be displayed 2389 and/or utilized 2389 2390 by said CD, therefore in some examples said CD receives 2395 said SD's transmission 2394, determines if translation into a specific CD control application protocol or interface 2389 is required 2395, and performs said translation 2396 (with said translation 2396 described in more detail elsewhere, such as in FIG. 63). In some examples no translation is required 2395 2389 of SD transmitted update(s), while in some examples translation is required 2395 2396 2389, and the SD's updated state, condition, data etc. is utilized to update the CD's control screen 2389 for entering subsequent SD remote control instructions 2390. Alternatively, in some examples translation of SD updates 2394 is required 2395 2396 and said translation is performed at the SD 2395 2396 before said SD updates 2394 are transmitted to the CD. Alternatively, in some examples translation of SD updates 2394 is required 2395 2396 and said SD updates 2394 are transmitted to an SD server or a third-party application or service which performs said translation remotely 2395 2396.
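
For illustration only, a minimal sketch of the update path just described, the reverse of the instruction translation sketched earlier: an SD transmits its new state, and the CD translates it (when required) into a form its control screen 2389 can display. The field mapping and names are assumptions for this sketch.

    # Translating an SD state update for display on the CD (2394-2396, 2389).
    UPDATE_TABLE = {"TEMP": "temperature_f", "MODE": "hvac_mode"}

    def translate_update(raw_update: dict) -> dict:
        # Map SD-native field names to the CD control screen's field names;
        # fields with no mapping pass through unchanged.
        return {UPDATE_TABLE.get(field, field): value
                for field, value in raw_update.items()}

    print(translate_update({"TEMP": 68, "MODE": "heat"}))
    # {'temperature_f': 68, 'hvac_mode': 'heat'}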

In some examples a CD remains at an SD interface 2389 where the CD's user may enter SD controls or instructions 2390 until the control session is ended 2397 2398, exited 2397 2398, or terminated 2397 2398. In some examples said control session may be ended 2397 2398 in some examples by timing out at the end of a period where an SD is not used; in some examples by being ended under program control when determined by an SD server, an SD service or another source; in some examples by timing out or being terminated when determined by the owner of the SD being used; in some examples at the end of a predetermined block of time such as for the free use of an SD in high demand; in some examples by other preprogrammed criteria; and in some examples by manual command(s).

Translate CD Instructions to SD, and SD Outputs to CD: Turning now to FIG. 63, “RCTP—Translate CD Instructions to an SD, and SD Outputs to CD,” in some examples translation is not required for CD instructions to an SD, and in some examples translation is not required for SD outputs to a CD—which provides direct means for remote control of an SD by a CD. In some examples, however, a networked SD capable of control can be managed and controlled by a CD even if said CD does not locally maintain in some examples control applications, in some examples viewer applications, in some examples communication protocols, or in some examples the SD's control instructions for remotely controlling every controllable SD. In some examples translation provides means for one or a plurality of RCTP implementations that may be implemented in one or a plurality of combinations of CDs and SD's.

In some examples a CD's instructions are translated into an SD's commands. Said process starts with a CD's control screen 2402 for remote control of an SD (as described elsewhere). In some examples a CD user enters a remote control instruction 2403 to be transmitted to an SD. In some examples a control instruction 2403 is specific to a unique SD 2410, and in some examples a control instruction 2403 includes identification of the unique SD 2410 under control and its address, such that a CD communicates directly with an SD. In some examples said control instruction 2403 does not need translation 2404 such as in some examples because it is already an SD control command; and said control instruction 2403 is transmitted 2406 directly to said SD 2410 to perform the instruction 2403. In some examples said control instruction 2403 requires translation 2404 2405 which in some examples may be performed by the CD 2401 2418, in some examples may be performed by the SD 2410 2418, in some examples may be performed by an SD server 2418, in some examples may be performed by a TPU server 2418, in some examples may be performed by a third-party SD service 2418, and in some examples may be performed by another application or resource.

Though said translation can be performed 2401 2410 2418 in one of a plurality of apparatuses, applications or services, in some examples the instruction is translated into an industry standard protocol 2401 2410 2418, in some examples the instruction is translated into a proprietary protocol 2401 2410 2418, and in some examples the instruction is translated with a custom integration between the devices 2401 2410 2418. In some examples a device profile is retrieved 2419 from remote storage 2424; in some examples an industry standard protocol is retrieved 2419 from remote storage 2424; in some examples a proprietary protocol is retrieved 2419 from remote storage 2424; in some examples a custom integration between the devices is retrieved 2419 from remote storage 2424; in some examples a list of SD specific commands is retrieved 2419 from remote storage 2424; in some examples a control application is retrieved 2419 from remote storage 2424 that contains the SD's commands; and in some examples other means are used to retrieve 2419 the SD's specific commands 2424. In some examples the control instruction 2403 is translated into an industry standard protocol instruction 2419 that corresponds to that SD; in some examples the control instruction 2403 is translated into a proprietary SD-specific protocol instruction 2419 that corresponds to that SD; and in some examples the control instruction 2403 is translated into an SD-specific command 2419 that corresponds to that device or model.

In some examples a translation 2419 does not succeed and in some examples a protocol is generated 2420 (as described elsewhere, such as in FIG. 59), which in some examples retrieves a uniform standard protocol that is used to generate a protocol (named a “generated protocol”), thereby determining an instruction 2403 2404 2405 2406 that corresponds to that SD 2410. In some examples a translation 2419 does not succeed and a protocol is not generated 2420, and in some examples a subset of device commands is utilized 2421 rather than a complete set of device commands (as described elsewhere, such as in FIGS. 59 and 60), thereby determining an instruction 2403 2404 2405 2406 that corresponds to that SD 2410. In some examples a translation 2419 does not succeed, a protocol is not generated 2420, and a subset of device commands is not utilized 2421, and in that case other known means 2422 are utilized, thereby determining an instruction 2403 2404 2405 2406 that corresponds to that SD 2410. In some examples a subset of device commands can be utilized 2421, such as when an SD 2410 is capable of features, functions and/or attributes not included in the retrieved 2424 device profile, industry-standard protocol, proprietary protocol, custom integration, list of SD-specific commands, control application with the SD's commands, etc.—and in these examples one or a plurality of defaults can be set 2404 2405 2419 2420 (with or without default attributes). In some examples translation processing fails 2418 2419 2420 2421 2422 2424 and in that case AKM steps are employed 2423 (as described elsewhere); if said AKM steps succeed 2423 then the resulting SD instruction or SD command is used 2405 2406 and remote control proceeds 2410; but if said AKM steps fail 2423 then the AKM error process initiates 2423, and an appropriately worded error message is displayed to the CD user 2425.
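
The escalating fallbacks above may be pictured, again purely as a hypothetical Java sketch, as an ordered list of stages in which the first stage to yield a usable command wins, and exhaustion of all stages leads to the error message 2425.

    // Hypothetical sketch: ordered fallbacks 2419 -> 2420 -> 2421 -> 2422 -> 2423;
    // if every stage fails, an error message is displayed to the CD user 2425.
    import java.util.List;
    import java.util.Optional;
    import java.util.function.Function;

    final class TranslationFallback {
        static Optional<String> resolve(String instruction,
                                        List<Function<String, Optional<String>>> stages) {
            for (Function<String, Optional<String>> stage : stages) {
                Optional<String> cmd = stage.apply(instruction);
                if (cmd.isPresent()) return cmd;       // the first stage to succeed wins
            }
            return Optional.empty();                   // all stages failed -> error process 2423 2425
        }

        public static void main(String[] args) {
            List<Function<String, Optional<String>>> stages = List.of(
                i -> Optional.empty(),                 // 2419 profile/protocol translation fails
                i -> Optional.empty(),                 // 2420 generated protocol fails
                i -> Optional.of("VOL+"),              // 2421 a subset of device commands succeeds
                i -> Optional.empty(),                 // 2422 other known means (not reached)
                i -> Optional.empty());                // 2423 AKM steps (not reached)
            System.out.println(resolve("VOLUME_UP", stages)
                .orElse("error: instruction could not be translated 2425"));
        }
    }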

In some examples the SD control command 2406 is transmitted to the SD 2406. In some examples the SD control command 2406 is transmitted as one individual instruction 2406 and in some examples the SD control command 2406 is a mass transmission of a plurality of instructions 2406 in the order entered by the CD's user. In some examples the SD remote control instruction transmitted 2406 (whether or not translated into an SD command 2404 2405 2418) is received by the SD 2410 2411, where it is utilized to perform the selected instruction 2411 (as described elsewhere). In some examples said SD command is performed successfully 2412 resulting in a new SD state 2414, SD condition 2414, SD data 2414, etc. (as described elsewhere). In some examples said SD command is not performed successfully 2412 and in this case an (optional) step is for the SD to attempt translation of the SD command received into an SD command that can be performed 2418 2419 2420 2421 2422 2424 2411. Alternatively, in some examples said SD command is not performed successfully 2412 and in this case an (optional) step is to notify the CD 2425 2401 so that it may attempt to re-enter the SD remote control instruction 2403 and re-translate the SD instruction 2418 2419 2420 2421 2422 2424 2411 (whether said re-translation is processed locally by the CD 2401 2418 or remotely by an SD server 2418 or another remote resource 2418) into an SD command that can be transmitted and performed 2406 2411.
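
As one more non-limiting Java sketch (hypothetical names throughout), single or batched transmission 2406 may preserve the CD user's entry order, and a command that is not performed successfully 2412 may be reported back so the CD can re-enter or re-translate it 2425 2401.

    // Hypothetical sketch: queued transmission 2406 in entry order, with failure
    // reporting 2412 back to the CD 2425 2401.
    import java.util.ArrayDeque;
    import java.util.Queue;

    final class SdTransmitter {
        private final Queue<String> batch = new ArrayDeque<>(); // preserves the CD user's entry order

        void enqueue(String sdCommand) { batch.add(sdCommand); }

        /** Sends the batch; returns false if any command is not performed 2412. */
        boolean flush() {
            while (!batch.isEmpty()) {
                String cmd = batch.poll();
                if (!performOnSd(cmd)) {
                    System.out.println("notify CD for re-entry/re-translation: " + cmd); // 2425 2401
                    return false;
                }
            }
            return true;                               // success yields a new SD state/condition/data 2414
        }

        private boolean performOnSd(String cmd) {      // stand-in for the SD performing the command 2411
            return !cmd.isEmpty();
        }

        public static void main(String[] args) {
            SdTransmitter tx = new SdTransmitter();
            tx.enqueue("PWR 1");
            tx.enqueue("REC START");
            System.out.println(tx.flush() ? "SD update 2414 expected" : "transmission halted");
        }
    }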

In some examples the output from the new SD state, SD condition, SD data, etc. (herein “SD update”) is compatible with the CD's remote control 2413 2402 and said SD update is transmitted to the CD 2414. In some examples the output from the SD update is not compatible with the CD's remote control 2413 2402 and in this case an (optional) step is to attempt translation of the SD update data into SD data that is compatible with the CD's remote control 2418 2419 2420 2421 2422 2424 2414 2402. In some examples said SD update is translated into an industry-standard protocol 2419 (as described elsewhere); in some examples said SD update is translated into a proprietary protocol 2419 (as described elsewhere); in some examples said SD update is translated with a custom integration between the devices 2419 (as described elsewhere); in some examples said SD update is translated with a generated protocol 2420 (as described elsewhere); in some examples said update is translated with a subset of device commands 2421 (as described elsewhere); and in some examples said update is translated by other known means 2422 (as described elsewhere). Alternatively, in some examples the output from the SD update is not compatible with the CD's remote control 2413 2402 and in this case an (optional) step is to transmit the incompatible SD update data 2414 to the CD 2401 2402 where it may be re-translated 2418 2419 2420 2421 2422 2424 2414 2402 (whether said re-translation is processed locally by the CD 2401 2418 or remotely via an SD server 2418 or another remote resource 2418) into compatible SD update data that may be utilized by the CD 2402 2403.

In some examples when translation is utilized the protocol used to translate the CD's remote control instructions into SD commands 2405 2418 2406 2411 is the same protocol that is used to translate the SD's update data for use by the CD's control screen 2413 2418 2414 2402. In some examples when translation is utilized different protocols are used; that is, one protocol is used to translate the CD's remote control instructions into SD commands 2405 2418 2406 2411 while a different protocol is used to translate the SD's update data for use by the CD's control screen 2413 2418 2414 2402.
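
The symmetry (or asymmetry) of the two translation paths can be sketched in Java as a session holding one protocol per direction, where both fields may reference the same protocol object; all names are hypothetical.

    // Hypothetical sketch: one protocol per direction; the same object may serve both.
    interface Protocol {
        String toSdCommand(String cdInstruction);      // CD instruction -> SD command 2405 2418 2406 2411
        String toCdUpdate(String sdOutput);            // SD update -> CD-compatible data 2413 2418 2414 2402
    }

    final class RctpSession {
        private final Protocol downstream;             // used for CD -> SD
        private final Protocol upstream;               // used for SD -> CD (may be the same object)

        RctpSession(Protocol downstream, Protocol upstream) {
            this.downstream = downstream;
            this.upstream = upstream;
        }

        String sendInstruction(String cdInstruction) { return downstream.toSdCommand(cdInstruction); }
        String receiveUpdate(String sdOutput)        { return upstream.toCdUpdate(sdOutput); }

        public static void main(String[] args) {
            Protocol shared = new Protocol() {         // a single protocol used in both directions
                public String toSdCommand(String i) { return "CMD:" + i; }
                public String toCdUpdate(String o)  { return "UPDATE:" + o; }
            };
            RctpSession session = new RctpSession(shared, shared);
            System.out.println(session.sendInstruction("POWER_ON"));
            System.out.println(session.receiveUpdate("state=on"));
        }
    }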

Virtual Teleportals on AIDs/AODs: Virtual Teleportals (VTPs) run on one or a plurality of AIDs/AODs (Alternative Input Devices/Alternative Output Devices, which are networked electronic devices as described elsewhere) that cannot directly become a Teleportal but have the capacity to run a VTP application, or a web browser application, that emulates one or a plurality of functions of a Teleportal. Depending on its capabilities, each device may also be able to use a VTP for other functions, such as in some examples RCTP control of subsidiary devices.

In some examples VTPs may be considered as providing the opposite functionality to RCTP (Remote Control Teleportaling): RCTP enables TP devices to control subsidiary devices, while a VTP runs on one or a plurality of networked electronic devices to enable them to provide Teleportal functionality by connecting to and controlling TP devices. VTPs provide additional means for today's blizzard of new and complex networked electronic devices to utilize the Teleportals' ARTPM and their digital realities. This expands the overall productivity and value of a plurality of types of networked electronic devices by providing means to perform more functions at lower cost, without needing to buy devices beyond those already owned. (In some examples, however, these networked electronic devices, herein called AIDs/AODs, may directly run Teleportal features and functions, and when they do so they substitute for TP devices.)

FIG. 64, “Virtual Teleportals on AIDs/AODs”: Some examples of Alternate Input Devices/Alternate Output Devices (AIDs/AODs) are illustrated in FIG. 64 as well as described elsewhere, which include in some examples mobile phones, in some examples Web services such as social media and other Web services that enable applications, in some examples personal computers, in some examples laptop computers, in some examples netbooks, in some examples electronic tablets or e-pads, in some examples DVRs (digital video recorders), in some examples set-top boxes for cable television or satellite television, in some examples networked game systems, in some examples networked televisions, in some examples networked digital cameras that have the added ability to download and run applets, and in some examples other types of networked electronic devices. In some examples AIDs/AODs communicate by various means over one or a plurality of disparate networks to TP devices (as described elsewhere).

Together, FIG. 65, “VTP Processing (AIDs/AODs)” and FIG. 66, “VTP Connections with TP Devices” and FIG. 67, “VTP Processing on TP Devices” comprise a system, method and/or process whereby a user of an AID/AOD runs a VTP client that in some examples enables the selection of a TP device from one or a plurality of TP devices; and in some examples connects to a requested TP device (with optional security protection such as login, authentication, authorization, etc.). In some examples an AID/AOD running a VTP client may select and connect to a TP device directly, and in some examples connect to a TP device by means such as an SD server or a similar facility that provides access to a plurality of TP devices of various types and configurations, each with a plurality of different types of tools and/or resources (such as in some examples applications, in some examples digital content, in some examples services, and in some examples other types of resources), so that a specific AID/AOD may establish a VTP connection with one of a plurality of selectable TP devices. In some examples the requested TP device runs a VTP server (which may include one or a plurality of virtual machines) on said connected TP device that generates an appropriate VTP client interface (which may optionally be an adapted interface), wherein the VTP server transmits the VTP interface to the VTP client. In some examples the VTP client receives an appropriate TP device interface (which may optionally be an adapted interface) that is displayed on the AID/AOD (where “display” includes any and all media capabilities of the AID/AOD such as video and/or audio); in some examples the VTP client interface enables the user of the AID/AOD to act on the TP device (by means of the VTP client interface which may include means such as a pointing device, keyboard input, clicking, touching or tapping, voice input, etc.) to issue a command or provide input or data; and in some examples the VTP client monitors the VTP client interface for user actions and transmits command(s) and/or input(s) to the VTP server that is running on a TP device. In some examples the VTP server receives VTP client command(s) and/or input(s) (and may optionally determine the appropriate TP device processing to perform if command translation[s] is required), and passes said user command(s) (or a series of commands) with their associated input(s) to the TP device to execute the commands and perform the required actions. In some examples the VTP server receives TP device processed output(s) and formats and transmits it to the VTP client for display; and in some examples the VTP server adapts the TP device processed output(s) to provide an adapted interface for display by the VTP client on a specific AID/AOD. In some examples the VTP client monitors subsequent VTP client interface interactions for user actions that require additional TP device processing, which continues the above described process until it is terminated and/or exited.
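
Compressed into a single non-limiting Java sketch (hypothetical names; the actual flows are those of FIGS. 65 through 67), the round trip just described reads as: connect, receive an interface, then relay user actions and adapted output until the session is terminated and/or exited.

    // Hypothetical sketch of the VTP round trip: connect, receive an interface,
    // then relay commands and adapted TP device output until exited.
    import java.util.Scanner;

    final class VtpClientLoop {
        public static void main(String[] args) {
            VtpServerStub server = new VtpServerStub();      // stand-in for a VTP server on a TP device
            System.out.println(server.connect("TP-Device-1", "user-credentials"));
            Scanner in = new Scanner(System.in);
            while (in.hasNextLine()) {                       // monitor the VTP client interface for user actions
                String action = in.nextLine();
                if (action.equals("exit")) break;            // terminate and/or exit the session
                System.out.println(server.process(action));  // command in, adapted output back
            }
        }
    }

    final class VtpServerStub {
        String connect(String tpDevice, String credentials) {
            return "connected to " + tpDevice + "; VTP interface transmitted to the VTP client";
        }
        String process(String command) {
            return "TP device output for '" + command + "', adapted for this AID/AOD";
        }
    }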

In some examples this parallels known uses of a client and server system that utilize a single server to facilitate the simultaneous use of a plurality of clients. In some examples an AID/AOD may run one or a plurality of VTP's; in some examples a TP device may run one or a plurality of VTP servers; in some examples a VTP server may run a plurality of virtual machines that each support a separate AID/AOD and each virtual machine may execute a process that adapts the TP device's output to each specific AID/AOD. In some examples one or a plurality of VTP(s), one or a plurality of VTP server(s) and one or a plurality of TP UIA instances may combine to enable one or a plurality of AIDs/AODs to simultaneously receive adaptive interfaces while controlling and/or using one or a plurality of TP devices such as in some examples one-to-one (one AID/AOD to one TP device); in some examples many-to-one (a plurality of AIDs/AODs to one TP device); in some examples one-to-many (one AID/AOD to a plurality of TP devices); and in some examples many-to-many (a plurality of AIDs/AODs to a plurality of TP devices).

In some examples a TP device's output can be both adapted to a specific AID/AOD and also modified by means of additional post-processing such as in some examples utilizing post-processing to add advertising or other marketing messages; in some examples utilizing post-processing to blend in the appearance of a new person or object (such as a logo, a business building, a sign or another marketing image); in some examples utilizing post-processing to remove a person or object (such as a logo or marketing image); in some examples utilizing post-processing to change the behavior of an interface component such as a widget or a link (such as in some examples altering which vendor's online store receives a user's purchase selection); in some examples utilizing post-processing to make a combination of changes such as replacing displayed advertisements and changing the online store visited by any remaining advertisements; and in some examples performing other transformations.
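
Such post-processing can be sketched, hypothetically and in Java, as an ordered chain of transformations applied to a TP device's adapted output before it reaches the VTP client.

    // Hypothetical sketch: an ordered chain of post-processing stages applied to
    // the adapted output, e.g. blending in a marketing image or redirecting a store link.
    import java.util.List;
    import java.util.function.UnaryOperator;

    final class OutputPostProcessor {
        private final List<UnaryOperator<String>> stages;

        OutputPostProcessor(List<UnaryOperator<String>> stages) { this.stages = stages; }

        String apply(String adaptedOutput) {
            for (UnaryOperator<String> stage : stages) adaptedOutput = stage.apply(adaptedOutput);
            return adaptedOutput;
        }

        public static void main(String[] args) {
            OutputPostProcessor p = new OutputPostProcessor(List.of(
                out -> out + " [sponsor logo blended in]",                    // add a marketing image
                out -> out.replace("store-A.example", "store-B.example")));   // change which online store a link visits
            System.out.println(p.apply("frame containing a link to store-A.example"));
        }
    }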

Turning now to FIG. 64, “Virtual Teleportals on AIDs/AODs,” some examples 2524 are illustrated of Alternate Input Devices/Alternate Output Devices (AIDs/AODs), which in some examples include wired and/or wireless networked electronic devices such as in some examples mobile phones 2525 2526, in some examples Web services such as social media and other Web services that enable applications 2527, in some examples personal computers 2528, in some examples laptops 2529, in some examples netbooks, in some examples electronic tablets or pads 2530, in some examples DVRs (digital video recorders) 2531, in some examples set-top boxes for cable television 2531 or satellite television 2531, in some examples networked game systems 2532, in some examples networked televisions 2533, in some examples networked digital cameras that have the added ability to download and run applets 2534 2535 (such as is already common for camera-enabled smart phones and camera-enabled electronic pads), and in some examples other types of networked electronic devices 2536 such as wearable electronic devices, servers, etc.

In some examples a communications link may include any means of transferring data such as in some examples a LAN 2537, in some examples a WAN 2537, in some examples a TPN (Teleportal Network) 2537, in some examples an IP network (such as the Internet) 2537, in some examples a PSTN (Public Switched Telephone Network) 2537, in some examples a cellular radio network 2537, in some examples an ISDN (Integrated Services Data Network) 2537, and in some examples another type of network. In some examples a typical task might include turning on one of these devices 2524 2525 2526 2527 2528 2529 2530 2531 2532 2533 2534 2535 2536, connecting it to a network 2537, downloading a VTP 2538, and running the VTP 2538, including in some examples storing the downloaded VTP 2538 in the device's local storage for faster future use by that networked electronic device 2525 2526 2527 2528 2529 2530 2531 2532 2533 2534 2535 2536.

VTP Processing (AIDs/AODs): Turning now to FIG. 65, “VTP Processing (AIDs/AODs),” in some examples one or a plurality of AIDs/AODs 2545 2546 may connect by one or a plurality of disparate networks 2544 with TP devices such as in some examples one or a plurality of LTPs 2547; in some examples one or a plurality of MTPs 2547; in some examples one or a plurality of RTPs 2548; and in some examples one or a plurality of another type of networked electronic device 2550 (as described elsewhere); and in some examples an AID/AOD may utilize an RCTP on an LTP 2547 or an MTP 2547 to select and control one or a plurality of subsidiary devices 2549.

In some examples a VTP 2552 comprises a VTP server and a VTP client. The VTP server runs on a TP device (such as in some examples an LTP 2547, in some examples an MTP 2547, and in some examples an RTP 2548) or in some examples runs on another type of networked electronic device (such as a TP Server 2550, Teleportal Utility, Teleportal Network Service, Web server 2550, Web service 2550, or other external means configured to provide Teleportal functions). The VTP client runs on one or a plurality of AIDs/AODs 2545 2546 (such as in some examples an application running within a web browser 2552, in some examples a downloadable application 2552, in some examples a purchased software application 2552 [e.g., an unmodifiable or customizable software product] that is sold by one or a plurality of vendors, in some examples an applet 2552, in some examples a component within an application 2552, in some examples a module within an application 2552, in some examples a browser-based interface to a web service 2552, in some examples a code-generated user interface and control application 2552, and in some examples known means other than illustrated herein). The VTP server and VTP client are coupled by one or a plurality of disparate networks 2544 (such as in some examples the Internet 2544, in some examples a local area network 2544, in some examples a wide area network 2544, in some examples the public switched telephone network 2544, in some examples a cellular network 2544, and in some examples another type of wired and/or wireless network).

In some examples a VTP server is coupled to TP processing (as described elsewhere) performed by a TP device by means of a TP command processing component that translates information from a VTP client into TP processing performed by a TP device; in some examples the output from said TP processing is processed for display by a VTP client by TP processing means as described elsewhere; and in some examples the TP command processing component transfers information in both directions through a TP device's network interface, such as providing commands to TP processing as well as providing display output from TP output processing for VTP client display. In some examples a VTP server serves the needs or requests of one or a plurality of VTP clients, and may be instantiated in some examples as software, in some examples as hardware, in some examples as a software/hardware system or subsystem, and in some examples as a specialized device such as a rack-mounted VTP server; and, just as other servers do, a VTP server may utilize any known form of technology or programming to provide services to clients.

In some examples a VTP client includes an information client (such as in some examples a web browser and in some examples other means as described elsewhere) capable of requesting 2553 and receiving a VTP app/applet 2553 from a VTP server or from another network-accessible source of VTP apps/applets, or from another accessible storage means. In some examples that information client provides sufficient identification of the requesting AID/AOD 2553 and (optionally) sufficient identification of the requesting user's identity 2553 so that the appropriate VTP app/applet may be selected (as described elsewhere such as in some examples FIG. 183 through FIG. 187) and downloaded to the AID/AOD. In some examples prior to download one or a plurality of validation(s) are performed 2554 such as in some examples identity authorization 2555, device compatibility with that specific VTP 2555, device capabilities such as its display interface 2555, communications protocol 2555, and/or other validations 2555. In some examples upon download 2556 and initial execution 2556 one or a plurality of validation(s) are performed 2554 2555. In some examples upon execution 2556 that information client defines a virtual machine environment that is hardware independent and operating system independent. In some examples the VTP client executes the downloaded VTP app/applet 2557 within its defined virtual machine environment to configure that AID/AOD as a “TP device controller” that connects over the network with a VTP server, and communicates over the network to send commands 2558 to the VTP server's TP command processing component 2559 and receive display output from it 2557 by means of those communications. In one example a VTP applet may be Java programming language code and the virtual machine environment can be created within a Java-enabled web browser.
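
The validations 2554 2555 that precede download can be pictured with a small hypothetical Java sketch in which a download request carries the identifying fields that are checked before the VTP app/applet is sent 2556; the field names and values are illustrative only.

    // Hypothetical sketch: validations 2554 2555 performed before the VTP
    // app/applet is downloaded 2556 to the requesting AID/AOD.
    import java.util.Map;

    final class VtpDownloadValidator {
        static boolean validate(Map<String, String> request) {
            return request.containsKey("identity")                  // identity authorization 2555
                && "compatible".equals(request.get("deviceClass"))  // device compatibility with this VTP 2555
                && request.containsKey("displayProfile")            // display interface capability 2555
                && "tp-net-1".equals(request.get("protocol"));      // communications protocol 2555
        }

        public static void main(String[] args) {
            Map<String, String> req = Map.of(
                "identity", "user-01", "deviceClass", "compatible",
                "displayProfile", "320x480", "protocol", "tp-net-1"); // hypothetical field values
            System.out.println(validate(req) ? "download VTP app/applet 2556" : "validation failed");
        }
    }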

In some examples a user employs a VTP client on an AID/AOD 2557 to enter commands 2558 (e.g., requests for service) that are transmitted over the network to a VTP server where a TP command processing component 2559 translates those commands into TP processing by a TP device or similar means 2559; and in some examples the TP device responds to those commands 2559 as described elsewhere; and in some examples the TP command processing component transfers back over the network, to the VTP client, the resulting display output from TP output processing for display by the VTP client 2557. In some examples a VTP client generates commands for monitoring 2558 in some examples a TP device 2559, in some examples an SPLS 2559, in some examples a focused connection 2559, and in some examples another process that a TP device performs 2559. In some examples a TP device responds to selected commands 2559 from a VTP client 2558 received over the network by a VTP server and continuously transfers the resulting output over the network back to the VTP client 2557. In one example a VTP client requests a focused connection with one of an SPLS's IPTR, such as with a specific identity; the TP device opens that focused connection and continuously updates that connection on the AID/AOD by means of its VTP server and the VTP client. In some examples alternative means may be employed, such as process control in which a VTP client Java applet generates a message or command 2557 to a VTP server that includes an object manager that responds to the message or command 2558, and invokes a method that controls a process 2558 and/or monitors a process 2558 in a TP device, and provides an updated display for the VTP client 2557.

In some examples the VTP client and VTP server process remain open and connected unless manually ended 2560 2561, and in some examples the VTP client and VTP server process automatically end 2560 2561 after a timeout or other pre-specified ending trigger (and in some examples that automated ending trigger[s] 2560 may be edited and saved). In some examples if a VTP client is ended 2560 2561, exited 2560 2561 or terminated 2560 2561 that VTP client and its settings may be saved to a local AID/AOD device, which will provide faster and more direct VTP uses in the future.

In some examples another alternative may be to enable one VTP server on one TP device to support a plurality of AIDs/AODs simultaneously while they each run a separate VTP client. In other words, in some examples a plurality of AIDs/AODs 2545 2546 each simultaneously run a VTP client 2556, and together these communicate with one VTP server that in turn utilizes 2557 2558 a single TP device's 2547 2548 2550 processing 2559, functions 2559, capabilities 2559, and outputs 2559 2557 to simultaneously support a plurality of separate VTP clients 2556 2557 2558, with each VTP client on one of a plurality of AIDs/AODs 2545 2546.

In some examples this parallels known uses of a client and server system that utilizes a single server to facilitate the simultaneous use of a plurality of clients. In one example of this a VTP server on a single TP device may enable multiple virtual machines 2559 in which each virtual machine contains a TP command processing component 2559 that translates the commands from one VTP client into TP processing by the TP device 2559; and in some examples the TP device responds separately to the commands 2559 from each one virtual machine in a VTP server; and in some examples the resulting display output from TP output processing of that one virtual machine's commands are transferred back over the network to the appropriate single VTP client, for display by that VTP client 2557 on its AID/AOD.

In some examples a VTP server has multiple virtual machines 2559 contained within, with each virtual machine capable of being connected to by one VTP client 2556 2557 2558 running on one AID/AOD. In some examples a user of a first AID/AOD runs a VTP client 2556 that has been previously downloaded and configured, which in turn communicates over one or a plurality of disparate networks and connects to a first virtual machine 2559 running on a TP device's VTP server. In some examples the user of that first AID/AOD employs its VTP client 2557 interface and the I/O means of that AID/AOD (such as in some examples mouse clicks, in some examples keyboard input, in some examples touch screen, in some examples voice recognition, and in some examples any other user I/O means) to input commands 2558, data 2558, etc. that are communicated to its respective virtual machine 2559 on a VTP server. In some examples the first virtual machine in the VTP server receives data from that first AID's/AOD's VTP client which is then processed by the TP device (as described elsewhere). In some examples a single refreshed display is produced by the TP device which the first virtual machine 2559 in the VTP server communicates to the VTP client in the first AID/AOD to update and refresh its display 2557; in some examples continuously updated video and audio are produced by the TP device which the first virtual machine 2559 in the VTP server communicates continuously to the VTP client in the first AID/AOD to continuously update the display of its video and the playing of its audio 2557; and in some examples other TP device processing may be output (such as in some examples bitmaps, in some examples images, in some examples user interface screens or component(s) of an interface screen(s), in some examples files, in some examples commands, in some examples other types or formats of data) which the first virtual machine 2559 in the VTP server communicates to the VTP client 2557 in the first AID/AOD for delivery to the user and/or for use by the user.

In some examples a second VTP client 2556 2557 2558 simultaneously interacts with one VTP server that runs a plurality of virtual machines 2559 within it, so that said second VTP client 2556 2557 2558 interacts with a second dedicated virtual machine 2559 within the VTP server. In some examples a plurality of VTP clients 2556 2557 2558 simultaneously interact with one VTP server that runs a plurality of virtual machines 2559 within it, so that each VTP client 2556 2557 2558 interacts with one dedicated virtual machine 2559 within the VTP server. In some examples by implementing a plurality of virtual machines 2559 that each correspond to one VTP client 2556 2557 2558, a single VTP server facilitates Teleportaling and TP device use by a plurality of AID/AOD users.
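
One hypothetical Java rendering of this arrangement is a VTP server that lazily allocates one dedicated worker (standing in for a virtual machine 2559) per connecting VTP client; every name below is illustrative rather than prescriptive.

    // Hypothetical sketch: one dedicated "virtual machine" worker 2559 per VTP
    // client, so a single VTP server serves several AIDs/AODs at once.
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    final class VtpServer {
        private final Map<String, ClientVm> vms = new ConcurrentHashMap<>(); // one VM 2559 per client

        ClientVm attach(String clientId) { return vms.computeIfAbsent(clientId, ClientVm::new); }

        static final class ClientVm {
            private final String clientId;
            ClientVm(String clientId) { this.clientId = clientId; }
            /** Translates one client's command into TP processing and returns its adapted output. */
            String handle(String command) {
                return "output of '" + command + "' adapted for " + clientId;
            }
        }

        public static void main(String[] args) {
            VtpServer server = new VtpServer();
            System.out.println(server.attach("AID-1").handle("open SPLS"));
            System.out.println(server.attach("AOD-2").handle("focus connection"));
        }
    }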

In some examples a TP device 2547 2548 may run a separate VTP server 2559 for each VTP 2553 2556 2557 that connects to it, with each VTP server capable of being connected to by one VTP client 2556 2557 2558 running on one AID/AOD 2545 2546. Therefore, a plurality of VTP clients 2556 2557 2558 on a plurality of AIDs/AODs 2545 2546 simultaneously interact with one TP device 2547 2548 that runs a plurality of VTP servers 2559 within it, so that each VTP client 2556 2557 2558 interacts with one dedicated VTP server 2559 within the TP device. In some examples by implementing a plurality of VTP servers 2559 that each correspond to one VTP client 2556 2557 2558, a single TP device facilitates Teleportaling and TP device use by a plurality of AID/AOD users.

In some examples a TP device 2547 2548 runs one or a plurality of VTP servers 2559 where each VTP server runs one or a plurality of virtual machines 2559 within it, so that each VTP server 2559 may interact with one or a plurality of VTP clients 2556 2557 2558. Therefore, in some examples a plurality of VTP clients 2556 2557 2558 simultaneously interact with a plurality of VTP servers on a single TP device 2547 2548 by means of a plurality of virtual machines 2559 within said VTP servers, so that each VTP client 2556 2557 2558 interacts with one dedicated virtual machine 2559 within the plurality of VTP servers. In some examples by implementing a plurality of VTP servers wherein each may run a plurality of virtual machines 2559 that each correspond to one VTP client 2556 2557 2558, a single TP device facilitates Teleportaling and TP device use by a plurality of AID/AOD users.

In some examples a VTP server connected to a network receives the output from TP output processing and compresses it before communicating it over a network to a VTP client; in some examples that VTP client receives and decompresses the data received from the VTP server; in some examples a VTP client compresses its data before communicating it over a network to a VTP server; and in some examples one or a plurality of known means for compressing and decompressing said data are utilized.
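
As one of the known means referred to, the following minimal Java sketch uses the standard java.util.zip classes to compress output on the server side and decompress it on the client side; the framing and transport around it are omitted.

    // Minimal sketch: compress TP output before sending (server side) and
    // decompress what is received (client side) using java.util.zip.
    import java.util.zip.Deflater;
    import java.util.zip.Inflater;

    final class VtpCompression {
        public static void main(String[] args) throws Exception {
            byte[] tpOutput = "TP output frame destined for a VTP client".getBytes("UTF-8");

            Deflater deflater = new Deflater();        // VTP server side: compress before communicating
            deflater.setInput(tpOutput);
            deflater.finish();
            byte[] packet = new byte[1024];
            int packetLen = deflater.deflate(packet);

            Inflater inflater = new Inflater();        // VTP client side: decompress the received data
            inflater.setInput(packet, 0, packetLen);
            byte[] restored = new byte[1024];
            int restoredLen = inflater.inflate(restored);

            System.out.println(new String(restored, 0, restoredLen, "UTF-8"));
        }
    }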

In some examples the output display area of a Teleportal is larger than the smaller screen size of a specific AID/AOD, and in such a case that VTP client sends specific data of the currently displayed area on the AID/AOD (the portion displayed with respect to the full output display area of a Teleportal) to the virtual machine in the VTP server; the virtual machine in the VTP server then prioritizes the order of the visual display blocks communicated (such as in some examples first communicating the currently displayable area of the AID/AOD so that it is received and displayed first, in some examples second communicating the TP output display areas immediately adjacent to the currently displayed area of the AID/AOD so said adjacent areas are rapidly available in the event a user wants to scroll in any direction, and in some examples third communicating the remaining TP output display areas) so that the area the user is currently viewing is updated first. In some examples continuous video and audio are output by a TP device (such as in some examples from a focused connection, in some examples from a constructed digital reality, in some examples from a TPDP event, and in some examples from another TP process that provides continuous real-time data), and in that case the communication priority is to continuously update the displayed AID/AOD screen so that the available processing and bandwidth are focused on the current area and real-time interaction(s) viewed and/or listened to by a user.
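
The three-tier ordering just described (visible area first, adjacent areas second, everything else third) can be sketched hypothetically in Java as a priority assigned to each display block before transmission; the block model is illustrative only.

    // Hypothetical sketch: 0 = inside the viewport, 1 = adjacent to it,
    // 2 = all remaining TP output display areas; lower numbers are sent first.
    import java.util.ArrayList;
    import java.util.Comparator;
    import java.util.List;

    final class DisplayBlockScheduler {
        static int priority(int block, int viewportStart, int viewportEnd) {
            if (block >= viewportStart && block <= viewportEnd) return 0;         // currently displayed area
            if (block == viewportStart - 1 || block == viewportEnd + 1) return 1; // adjacent, for scrolling
            return 2;                                                             // remaining display areas
        }

        public static void main(String[] args) {
            List<Integer> blocks = new ArrayList<>();
            for (int i = 0; i < 10; i++) blocks.add(i); // the full TP output area, as 10 blocks
            blocks.sort(Comparator.comparingInt(b -> priority(b, 4, 5))); // viewport covers blocks 4-5
            System.out.println("transmission order: " + blocks);
        }
    }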

In some examples a VTP client sends a command or other data, and in such a case that command is given priority over other communications, such that it is executed immediately before other operations and/or communications are continued, so that a dedicated virtual machine in a VTP server provides rapid responses to user commands. In some examples a VTP client issues a command that changes what will be displayed on the AID/AOD, and that in turn interrupts and ends any video and/or audio that are being sent by a VTP server, so that available processing and other resources, and available network bandwidth, may be directed to responding to said VTP client's command with the fastest speed available.
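
The preemption described can be sketched, hypothetically, as a flag that an arriving command flips so the streaming loop yields immediately; all names are illustrative.

    // Hypothetical sketch: an arriving client command interrupts and ends the
    // stream so processing and bandwidth go to the fastest possible response.
    import java.util.concurrent.atomic.AtomicBoolean;

    final class StreamPreemption {
        private final AtomicBoolean streaming = new AtomicBoolean(true);

        void onClientCommand(String command) {
            streaming.set(false);                      // interrupt and end the current video/audio
            System.out.println("executing immediately: " + command);
        }

        void streamLoop() {
            while (streaming.get()) {
                // send the next video/audio chunk to the VTP client (omitted)
                break;                                 // placeholder so this sketch terminates
            }
        }

        public static void main(String[] args) {
            StreamPreemption s = new StreamPreemption();
            s.onClientCommand("change display");       // preempts before the loop would continue
            s.streamLoop();                            // returns immediately: streaming == false
        }
    }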

In some examples other alternatives for downloading a VTP may include in some examples detecting the presence of one or a plurality of local devices that may be controlled as a user moves into their proximity, so that VTP control may be essentially transparent and in some examples “always on” (with such connectivity as described elsewhere, such as with a TP URCI [Universal Remote Control Interface]); in some examples an AID/AOD with one or a plurality of VTPs may store one or a plurality of identifiers for controllable devices for which it has already downloaded and set up VTP control, and in such a case executing a specific device's VTP may prompt a user for authentication or credentials prior to taking remote control; in some examples an AID/AOD with one or a plurality of VTPs may store one or a plurality of identifiers for controllable devices for which it has already downloaded and set up VTP control, and in such a case automatically acquire remote control of one or a plurality of VTP-controllable devices when a specific device's VTP is executed; in some examples a device may store one or a plurality of VTPs for controlling it and may download the appropriate VTP when requested by an authorized identity; in some examples when a device downloads an appropriate VTP to an AID/AOD for controlling it, it may initiate an authentication and authorization process to confirm and validate the identity of the user who is taking control; and in some examples when a device downloads an appropriate VTP and authenticates the user who has taken control, that authorization may be saved and stored for future rapid re-use, in some examples in the device and in some examples in the VTP on the AID/AOD.

In some examples to select a specifi