Category: Explaining Series

Explaining series: The new IOTA Whitepaper (vers. 1.3) summarized


When dealing with innovative technology, it is not always easy to understand the underlying mechanisms and technical features from the whitepaper alone, since it is written by and for specialists with a thorough grounding in mathematics and statistics.

The following article aims to outline the most important parts.

Since most of the whitepaper's insights are not easy to digest, I could not remove every complex part.

Disclaimer: To catch all the information, you should read the whitepaper itself, as I cannot guarantee that I have included everything necessary.
Italicized passages are quotations from the original document.

1 Introduction and description of the system

For IOTA, there are several layers of background information that should be kept in mind when trying to understand its field of application and functionality.

  • The IoT (Internet of Things): a global network of devices that interact with each other for different purposes. Unlike the Internet, the IoT is not organized in one topology that is synchronous and intrinsically connected like a flat field. The IoT is more like a connected infrastructure comparable to a bumpy geomorphological landscape, with hills, holes, and continents, connected via several connectivity types, with divided subnetworks, offline areas, and usually higher latencies than the Web. It is, however, also harder to attack with DDoS attacks, thanks to its natural barriers and mesh-net connections.
  • Blockchains: peer-to-peer networks that enable many use cases, often carry a monetary incentive for miners or investors, and could be a key technology for disrupting certain industries, such as fintech or big-data markets. Blockchains run on the Internet.
  • DLT (Distributed Ledger Technology): a related family of technologies that shares some similarities with blockchains. The difference lies in the chronological order used to store information and value in blocks and chains, whereas distributed ledger technologies share these values and pieces of information between all nodes in their network. Although all blockchains distribute their values, they are usually bound to the chronological order of blocks arranged in a chain, which is not the case in DLTs. That difference makes the IOTA architecture different in many regards, while it keeps the advantages of blockchains.
  • The DAG (Directed Acyclic Graph): a mathematical graph used for ordering and updating nodes or edges. The important property is that dependencies exist along its paths which make it impossible for the confirmation of a transaction to “travel” back to its origin; cycles are not allowed in a DAG. That means that once information is included, it is impossible to change it afterwards, which is an important part of the consensus rule.
  • IOTA, the software/cryptocurrency/protocol that is based on the whitepaper of the Tangle by Prof. Serguei Popov.
    IOTA is made for the machine economy in the IoT, so its natural habitat is a mesh network, where asynchronicity and offline clusters are part of the system. The Tangle, which is based on a DAG, brings up the following rule for IOTA: a node can only issue a new transaction if it references two other unconfirmed transactions.
  • Therefore, users act as both miners and validators. Consensus is not decoupled, as it is in most blockchains.
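To make the Tangle's structure concrete, here is a minimal sketch (the transaction ids and the dictionary layout are my illustrative assumptions, not the real IOTA data format): each transaction stores references to the two transactions it approves, and because those references only ever point back toward the genesis, no cycles can occur.

```python
# Minimal tangle sketch: each transaction approves two earlier transactions.
# Ids and structure are illustrative only.
tangle = {
    "genesis": (),                   # the genesis approves nothing
    "tx1": ("genesis", "genesis"),   # the two approvals may coincide
    "tx2": ("genesis", "tx1"),
    "tx3": ("tx1", "tx2"),
}

def approves_indirectly(tangle, tx, target):
    """True if `tx` directly or indirectly approves `target`."""
    stack = list(tangle[tx])
    seen = set()
    while stack:
        cur = stack.pop()
        if cur == target:
            return True
        if cur not in seen:
            seen.add(cur)
            stack.extend(tangle[cur])
    return False

print(approves_indirectly(tangle, "tx3", "genesis"))  # True
```

Because approval links only point toward older transactions, every transaction indirectly approves the genesis, while the reverse direction is impossible.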

Along the way, unconfirmed transactions are referenced. If they are referenced by good nodes, they earn “weight” and “cumulative weight” (Figure 1, bold numbers). The latter is the most important measurement for a transaction on its way to network approval. A good node will usually only reference a tip that was not created by a lazy node, i.e. a tip whose issuer did participate in approving recent transactions.

Prof. Serguei Popov writes:

In order to issue a transaction, a node does the following:

• The node chooses two other transactions to approve according to an algorithm. In general, these two transactions may coincide.

• The node checks if the two transactions are not conflicting, and does not approve conflicting transactions.

• For a node to issue a valid transaction, the node must solve a cryptographic puzzle similar to those in the Bitcoin blockchain. This is achieved by finding a nonce such that the hash of that nonce concatenated with some data from the approved transaction has a particular form. In the case of the Bitcoin protocol, the hash must have at least a predefined number of leading zeros.
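The puzzle quoted above can be sketched in a few lines. Note the hedge: IOTA's real PoW uses a different hash function and checks trailing zero trits, so this SHA-256/leading-zeros version only illustrates the Bitcoin-style principle the whitepaper refers to.

```python
import hashlib

def find_nonce(tx_data: bytes, difficulty: int = 2) -> int:
    """Toy Bitcoin-style puzzle: find a nonce such that
    sha256(nonce || tx_data) starts with `difficulty` zero hex digits.
    (IOTA's actual PoW differs; this only illustrates the principle.)"""
    nonce = 0
    while True:
        digest = hashlib.sha256(str(nonce).encode() + tx_data).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce
        nonce += 1

nonce = find_nonce(b"approved-tx-data")
print(nonce)
```

Raising `difficulty` makes the search exponentially harder, which is exactly why an “abundance of transactions with acceptable weights” cannot be generated cheaply.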


The IOTA network is asynchronous. In general, nodes do not necessarily see the same set of transactions. 

The tangle may contain conflicting transactions. The nodes do not have to achieve consensus on which valid transactions have the right to be in the ledger, meaning all of them can be in the tangle. However, in the case where there are conflicting transactions, the nodes need to decide which transactions will become orphaned.

The confidence level of a transaction is decided by the tip-selection algorithm (MCRW, the Markov chain random walk also referred to as MCMC below), which selects transactions for confirmation according to their cumulative weight: the higher a tip's cumulative weight, the more likely it is to be chosen. Lazy tips: a node that issues a new transaction but avoids doing too much PoW by choosing older transactions for approval creates a lazy tip. Such a tip is less likely to be chosen, because its low cumulative weight leads the tip-selection algorithm to select it slowly or not at all.

2 Weights and more

The following part in the whitepaper describes the weight of a transaction and other indicators.

The weight of a transaction is proportional to the amount of work that the issuing node invested into it.

The weight is attached to a transaction as a positive integer, i.e. a whole number that can be read from the transaction's information and the bundle it is in.

In general, the idea is that a transaction with a larger weight is more “important” than a transaction with a smaller weight.

The basic idea of the weight is to give transactions an indicator if they are valid and coming from a “good node”.

It’s practically impossible to generate “an abundance of transactions with acceptable weights in a short period of time“.


  • Weight:
    The weight of a transaction is proportional to the amount of work that the issuing node invested into it
  • Cumulative weight:
    The own weight of a particular transaction plus the sum of own weights of all transactions that directly or indirectly approve this transaction.
  • Tips:
    Unapproved transactions in the tangle graph.
  • Height:
    The length of the longest oriented path to the genesis.
  • Depth:
    The length of the longest reverse-oriented path to some tip.
  • Score:
    The score of a transaction is the sum of own weights of all transactions approved by this transaction plus the own weight of the transaction itself.

For a thorough understanding of these indices, I recommend reading Section 2 of the whitepaper, as it describes the calculation of the cumulative weight. To draw a conclusion: the more PoW is done by honest nodes, the higher the cumulative weight gets.
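The indices above can be computed on a toy example (the four-transaction tangle and the unit own-weights are my assumptions for this sketch, not real IOTA data):

```python
# Toy tangle: each entry maps a transaction to the two it approves.
tangle = {
    "genesis": (),
    "a": ("genesis", "genesis"),
    "b": ("genesis", "a"),
    "c": ("a", "b"),          # "c" is currently a tip: nothing approves it
}
own_weight = {tx: 1 for tx in tangle}

def approved_by(tx):
    """All transactions that `tx` directly or indirectly approves."""
    out, stack = set(), list(tangle[tx])
    while stack:
        cur = stack.pop()
        if cur not in out:
            out.add(cur)
            stack.extend(tangle[cur])
    return out

def score(tx):
    # own weight plus the own weights of everything `tx` approves
    return own_weight[tx] + sum(own_weight[x] for x in approved_by(tx))

def cumulative_weight(tx):
    # own weight plus the own weights of everything that approves `tx`
    approvers = [t for t in tangle if tx in approved_by(t)]
    return own_weight[tx] + sum(own_weight[t] for t in approvers)

def height(tx):
    # length of the longest oriented path down to the genesis
    return 0 if not tangle[tx] else 1 + max(height(p) for p in tangle[tx])

print(cumulative_weight("a"))  # 3: "a" itself plus its approvers "b" and "c"
print(score("c"))              # 4: "c" plus "a", "b" and the genesis
print(height("c"))             # 3: c -> b -> a -> genesis
```

The older a transaction is, the more approvers it accumulates, which is exactly why honest PoW keeps pushing its cumulative weight up.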

3 Stability of the system, and cutsets

Under the assumption that all devices in the Tangle network have the same computational power, tip approval is distinguished between two scenarios:

Low load: the typical number of tips is small, and frequently becomes 1. This may happen when the flow of transactions is so small that it is not probable that several different transactions approve the same tip. Also, if the network latency is very low and devices compute fast, it is unlikely that many tips would appear. This even holds true in the case when the flow of transactions is reasonably large. Moreover, we have to assume that there are no attackers that try to artificially inflate the number of tips.

High load:  the typical number of tips is large. This may happen when the flow of transactions is large, and computational delays together with network latency make it likely that several different transactions approve the same tip.

As written in the whitepaper, there is no clear borderline between those two scenarios, just an informal distinction in order to showcase extreme cases.


Important indicators:

  • L(t):
    The total number of tips in the system at time t.
  • h:
    The average time a device needs to perform the PoW.
  • λ:
    The rate of the Poisson point process modeling the large number of incoming transactions from roughly independent entities.
  • r:
    The number of revealed tips: tips that were attached to the tangle before time t − h (i.e. before the current PoW began).
  • λh:
    The number of hidden tips: tips attached to the tangle during the interval [t − h, t), i.e. while the PoW was being done.

The stability of the total number of tips L(t) is the most important point to assess the rate of approval in both load-regimes.


  • We assume that the number of tips remains roughly stationary in time and is concentrated around a number L0 > 0.
  • In the stationary regime, the mean number of tips chosen (i.e., approved for the first time) by an incoming transaction should be equal to 1.


Approval in the low load regime:

The situation in the low load regime is relatively simple. The first approval happens on an average timescale of order λ^−1 since one of the first few incoming transactions will approve a given tip.

Approval in the high load regime:

One may assume that the Poisson flows of approvals to different tips are independent and have an approximate rate of 2λ/L0. Therefore, the expected time for a transaction to receive its first approval is around L0/(2λ) ≈ 1.45h
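To get a feel for why the tip count L(t) stays roughly stationary, here is a toy simulation. The uniform two-tip selection, the instant global knowledge of approvals, and all parameter values are simplifying assumptions of mine, not the whitepaper's exact model.

```python
import random

# Toy simulation of the tip count L(t) in the high load regime.
def average_tip_count(lam=10.0, h=1.0, horizon=200.0, seed=0):
    rng = random.Random(seed)
    t = 0.0
    attach_time = [0.0]       # the genesis
    approved = [False]
    samples = []
    while t < horizon:
        t += rng.expovariate(lam)   # Poisson arrivals with rate lam
        # tips already revealed to the issuer: attached before t - h
        visible = [i for i in range(len(attach_time))
                   if attach_time[i] <= t - h and not approved[i]]
        for i in rng.sample(visible, k=min(2, len(visible))):
            approved[i] = True      # the new transaction approves two tips
        attach_time.append(t)       # the new transaction is itself a tip now
        approved.append(False)
        samples.append(approved.count(False))
    # average over the second half, once the process has settled
    return sum(samples[len(samples) // 2:]) / (len(samples) // 2)

print(round(average_tip_count(), 1))
```

Each arrival adds one tip and approves up to two older ones, so the tip count fluctuates around a stable level instead of growing without bound.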

However, it is worth noting that for more elaborate approval strategies, it may not be a good idea to passively wait a long time until a transaction is approved by the others. This is due to the fact that “better” tips will keep appearing and will be preferred for approval. Rather, in the case when a transaction is waiting for approval over a time interval much larger than L0/2λ, a good strategy would be to promote this latent transaction with an additional empty transaction. In other words, a node can issue an empty transaction that approves its previous transaction together with one of the “better” tips to increase the probability that the empty transaction receives approval -> which can lead to an attack, described later.


1. We distinguish between two regimes, low load and high load (Figure 3).

2. There are only a few tips in the low load regime. A tip gets approved for the first time in Θ(λ^−1) time units, where λ is the rate of the incoming flow of transactions.

3. In the high load regime, the typical number of tips depends on the tip approval strategy employed by the new transaction.

4. If a transaction uses the strategy of approving two random tips, the typical number of tips is given by (equation 1). It can be shown that this strategy is optimal with respect to the typical number of tips. However, it is not practical to adopt this strategy because it does not encourage approving tips.

5. More elaborate strategies are needed to handle attacks and other network issues. A family of such strategies is discussed in Section 4.1 (of the whitepaper).

6. The typical time for a tip to be approved is Θ(h) in the high load regime, where h is the average computation/propagation time for a node. However, if the first approval does not occur in the above time interval, it is a good idea for the issuer and/or receiver to promote that transaction with an additional empty transaction.

7. It can be observed that at any fixed time t, the set of transactions that were tips at some moment s ∈ [t, t + h(L_0, N)] typically constitutes a cutset. Any path from a transaction issued at time t′ > t to the genesis must pass through this set. It is important that the size of a new cutset in the tangle occasionally becomes small. One may then use the small cutsets as checkpoints for possible DAG pruning and other tasks.

3.1 How fast does the cumulative weight typically grow?

The cumulative weight is the most important indicator, because most attack vectors can be built around it.

Tip approval is mostly based on the cumulative weight; therefore, one must understand the cumulative weight adaptation rate.

Prof. Popov writes in a footnote:

In fact, the author’s feeling is that the tip approval strategy is the most important ingredient for constructing a tangle-based cryptocurrency. It is there that many attack vectors are hiding. Also, since there is usually no way to enforce a particular tip approval strategy, it must be such that the nodes would voluntarily choose to follow it knowing that at least a good proportion of other nodes does so.

The growth of the cumulative weight:

Low load regime: after a transaction gets approved several times, its cumulative weight will grow with speed λ, because all new transactions will indirectly reference it.

High load regime: when the network is in the high load regime, an old transaction with a large cumulative weight will also experience weight growth with speed λ, because essentially all new transactions indirectly reference it. Moreover, when the transaction is first added to the tangle, it may have to wait some time to be approved. In this interval, the transaction's cumulative weight behaves in a random fashion.


  • H(t):
    The expected cumulative weight at time t (for simplicity, we start counting time at the moment when the transaction was revealed to the network, i.e., h time units after it was created).
  • K(t):
    The expected number of tips that approve the transaction at time t.

The whitepaper offers complicated calculations at this point, which lead to the following results on the cumulative weight adaptation rate:


1. After a transaction gets approved multiple times in the low load regime, its cumulative weight will grow with speed λw, where w is the mean weight of a generic transaction.

2. In the high load regime, there are two distinct growth phases. First, a transaction's cumulative weight H(t) grows with increasing speed during the adaptation period according to (equation 8 in the whitepaper). After the adaptation period is over, the cumulative weight grows with speed λw (Figure 4). In fact, for any reasonable strategy, the cumulative weight will grow with this speed after the end of the adaptation period, because all incoming transactions will indirectly approve the transaction of interest.

3. One can think of the adaptation period of a transaction as the time until most of the current tips indirectly approve that transaction. The typical length of the adaptation period is given by (equation 7 in the whitepaper).

4 Possible attack scenarios

Outpacing attack/large weight attack:

1. An attacker sends a payment to a merchant and receives the goods after the merchant decides the transaction has a sufficiently large cumulative weight.

2. The attacker issues a double-spending transaction.

3. The attacker uses their computing power to issue many small transactions that approve the double-spending transaction, but do not approve, either directly or indirectly, the original transaction that they sent to the merchant.

4. It is possible for the attacker to have a plethora of Sybil identities which are not required to approve tips.

5. An alternative method to item 3 would be for the attacker to issue a big double-spending transaction using all of their computing power. This transaction would have a very large own weight, and would approve transactions prior to the legitimate transaction used to pay the merchant.

6. The attacker hopes that their dishonest subtangle outpaces the honest subtangle. If this happens, the main tangle continues growing from the double-spending transaction, and the legitimate branch with the original payment to the merchant is orphaned (fig. 5).

7. From the above discussion (calculations in the whitepaper), it is important to recognize that the inequality λ > µ should hold for the system to be secure. In other words, the input flow of “honest” transactions should be large compared to the attacker's computational power. Otherwise, the estimate (equation 12) would be useless. This indicates the need for additional security measures, such as checkpoints, during the early days of a tangle-based system. -> the Coordinator
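The λ > µ condition can be felt in a small Monte Carlo sketch of the outpacing race. The weight-per-event model, the starting gap w0 (standing in for the merchant's acceptance threshold) and all parameter values are illustrative assumptions, not the whitepaper's exact estimate (12).

```python
import random

# Monte Carlo sketch of the outpacing race behind the inequality lam > mu.
def catch_up_probability(lam=10.0, mu=5.0, w0=10, trials=5000, seed=1):
    rng = random.Random(seed)
    p_honest = lam / (lam + mu)   # probability the next weight unit is honest
    wins = 0
    for _ in range(trials):
        gap = w0                  # honest lead in cumulative weight
        while True:
            gap += 1 if rng.random() < p_honest else -1
            if gap <= 0:          # attacker's subtangle has caught up
                wins += 1
                break
            if gap >= w0 + 60:    # attacker has effectively lost this race
                break
    return wins / trials

print(catch_up_probability())               # tiny when lam > mu
print(catch_up_probability(lam=5, mu=10))   # near 1 when mu > lam
```

In this simple biased-random-walk model the catch-up probability behaves like (µ/λ)^w0, which is why a large honest transaction flow relative to the attacker's power keeps the system secure.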

8. When choosing a strategy for deciding which of two conflicting transactions is valid, one has to be careful when using cumulative weight as a decision metric. This is due to the fact that cumulative weight can be subject to an attack similar to the one described in Section 4.1: the attacker may prepare a double-spending transaction well in advance, build a secret subtangle referencing it, and then broadcast that subtangle after the merchant accepts the legitimate transaction. A better method for deciding between two conflicting transactions might be the one described in the next section: run the tip selection algorithm and see which of the two transactions is indirectly approved by the selected tip.

4.1 A parasite chain attack and a new tip selection algorithm

1. An attacker secretly builds a subtangle that occasionally references the main tangle to gain a higher score. Note that the score of honest tips is roughly the sum of all own weights in the main tangle, while the score of the attacker's tips also contains the sum of all own weights in the parasite chain. Since network latency is not an issue for an attacker who builds a subtangle alone, they might be able to give more height to the parasite tips if they use a sufficiently strong computer. Moreover, the attacker can artificially increase their tip count at the moment of the attack by broadcasting many new transactions that approve transactions that they issued earlier on the parasite chain (Figure 6). This will give the attacker an advantage in the case where the honest nodes use some selection strategy that involves a simple choice between available tips.

2. To defend against this attack style, we are going to use the fact that the main tangle is supposed to have more active hashing power than the attacker. Therefore, the main tangle is able to produce larger increases in cumulative weight for more transactions than the attacker. The idea is to use an MCMC algorithm to select the two tips to reference.
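A minimal sketch of that random walk follows. The transition rule that weights each step by exp(-alpha * (H[x] - H[y])), and all of the data below, are my illustrative assumptions based on the whitepaper's description, not the exact production algorithm.

```python
import math
import random

def mcmc_walk(approvers, H, start, alpha=0.5, rng=random.Random(42)):
    """Random-walk from `start` toward the tips. `approvers[x]` lists the
    transactions that directly approve x; H[x] is x's cumulative weight.
    Heavier next steps are exponentially more likely, so the walk tends
    to stay on the main tangle. Returns the tip where the walk ends."""
    x = start
    while approvers.get(x):
        candidates = approvers[x]
        weights = [math.exp(-alpha * (H[x] - H[y])) for y in candidates]
        x = rng.choices(candidates, weights=weights)[0]
    return x

# Tiny example: a main-tangle tip with large cumulative weight versus a
# parasite tip with almost none.
approvers = {"genesis": ["a"], "a": ["main_tip", "parasite_tip"]}
H = {"genesis": 100, "a": 99, "main_tip": 50, "parasite_tip": 2}
print(mcmc_walk(approvers, H, "genesis"))  # "main_tip", with overwhelming odds
```

Because the parasite tip's cumulative weight is so much smaller, the exponential bias makes stepping onto it astronomically unlikely, which is the intuition behind item 3 below.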

3. It is easy to see why the MCMC selection algorithm will, with high probability, not select one of the attacker's tips. The reasoning is identical to the lazy tip scenario: the sites on the parasite chain will have a cumulative weight that is much smaller than the sites they reference on the main tangle. Therefore, it is not probable that the random walker will ever jump to the parasite chain unless it begins there, and this event is not very probable either, because the main tangle contains more sites.
4. In any case, there is no large incentive for nodes to be selfish, because the possible gains only amount to a slight decrease in confirmation time. This is inherently different from other decentralized constructs, such as Bitcoin. The important fact is that nodes do not have reasons to abandon the MCMC tip selection algorithm.

4.2 Splitting attack

1. Aviv Zohar suggested the following attack scheme against the proposed MCMC algorithm. In the high-load regime, an attacker can try to split the tangle into two branches and maintain the balance between them. This would allow both branches to continue to grow. The attacker must place at least two conflicting transactions at the beginning of the split to prevent an honest node from effectively joining the branches by referencing them both simultaneously. Then, the attacker hopes that roughly half of the network would contribute to each branch, so that they would be able to “compensate” for random fluctuations even with a relatively small amount of personal computing power. If this technique works, the attacker would be able to spend the same funds on the two branches.

2. To defend against such an attack, one needs to use a “sharp-threshold” rule that makes it too hard to maintain the balance between the two branches. An example of such a rule is selecting the longest chain on the Bitcoin network.

3. It is worth noting that the attacker's task is very difficult because of network synchronization issues: they may not be aware of a large number of recently issued transactions.

4. Another effective method for defending against a splitting attack would be for a sufficiently powerful entity to instantaneously publish a large number of transactions on one branch, thus rapidly changing the power balance and making it difficult for the attacker to deal with this change. If the attacker manages to maintain the split, the most recent transactions will only have around 50% confirmation confidence (Section 1), and the branches will not grow. In this scenario, the “honest” nodes may decide to start selectively giving their approval to transactions that occurred before the bifurcation, bypassing the opportunity to approve the conflicting transactions on the split branches.

5. One may consider other versions of the tip selection algorithm. For example, if a node sees two big subtangles, then it chooses the one with a larger sum of own weights before performing the MCMC tip selection algorithm outlined above.


Attack-scenario conclusion: 

1. We considered attack strategies for when an attacker tries to double-spend by “outpacing” the system.

2. The “large weight” attack means that, in order to double-spend, the attacker tries to give a very large weight to the double-spending transaction so that it would outweigh the legitimate subtangle. This strategy would be a menace to the network in the case where the allowed own weight is unbounded. As a solution, we may limit the own weight of a transaction from above, or set it to a constant value.

3. In the situation where the maximal own weight of a transaction is m, the best attack strategy is to generate transactions with own weight m that reference the double-spending transaction. When the input flow of “honest” transactions is large enough compared to the attacker's computational power, the probability that the double-spending transaction has a larger cumulative weight can be estimated using formula (12) (see also the examples below equation 12).

4. The attack method of building a “parasite chain” makes approval strategies based on height or score obsolete, since the attacker's sites will have higher values for these metrics than the legitimate tangle. On the other hand, the MCMC tip selection algorithm described in Section 4.1 seems to provide protection against this kind of attack.

5. As a bonus, the MCMC tip selection algorithm also offers protection against lazy nodes.

5 Resistance to quantum computations

As of today, one must check an average of 2^68 nonces to find a suitable hash that allows a new block to be generated. It is known (see e.g. [source 15]) that a quantum computer would need Θ(√N) operations to solve a problem that is analogous to the Bitcoin puzzle stated above. This same problem would need Θ(N) operations on a classical computer.

Therefore, a quantum computer would be around √(2^68) = 2^34 ≈ 17 billion times more efficient at mining the Bitcoin blockchain than a classical computer. Also, it is worth noting that if a blockchain does not increase its difficulty in response to increased hashing power, there would be an increased rate of orphaned blocks. For the same reason, a “large weight” attack would also be much more efficient on a quantum computer. However, capping the weight from above, as suggested in Section 4, would effectively prevent a quantum computer attack as well. This is evident in IOTA, because the number of nonces that one needs to check in order to find a suitable hash for issuing a transaction is not unreasonably large. On average, it is around 3^8. The gain in efficiency for an “ideal” quantum computer would therefore be of order 3^4 = 81, which is already quite acceptable.
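The figures quoted above are easy to verify; the √N speedup itself is taken from the text, and the code below just checks the arithmetic:

```python
import math

# Checking the numbers: a sqrt(N) quantum speedup on N candidate nonces.
assert math.isqrt(2**68) == 2**34            # sqrt(2^68) = 2^34
print(2**34)                                 # 17179869184, about 17 billion

nonces_iota = 3**8                           # average nonces checked in IOTA
print(nonces_iota, math.isqrt(nonces_iota))  # 6561 and 3^4 = 81
```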

More importantly, the algorithm used in the IOTA implementation is structured such that the time to find a nonce is not much larger than the time needed for the other tasks that are necessary to issue a transaction. The latter part is much more resistant to quantum computing, and therefore gives the tangle much more protection against an adversary with a quantum computer than the (Bitcoin) blockchain has.

Questions about the whitepaper, the math it contains, or specific attack vectors can be discussed in the development Slack under #tanglemath.


Explaining Series: Fog Computing in the Internet of Things


Fog computing: one of the many trendy new terms that we see and read almost everywhere in this field.

What is it? And how can IOTA enable the perfect fog-computing landscape that the IoT needs?

I will give you a short explanation and good sources for a quick heads-up.


This roundup is an experiment that aims at a better understanding of the bigger picture. The keywords before the actual article are meant as an information index.

  • Internet of Things (IoT) = term coined by Kevin Ashton (MIT), 1999
  • Fog computing = term coined by Cisco
  • Fog = decentralized/distributed
  • Cloud = centralized
  • Realm = IoT + IIoT, B2B, M2M, IoE, smart grids, smart home, smart cities, the interconnected world
  • Problem = unused sensor data, the need for a solution for a distributed network, the costs of cloud computing, time
  • Application = evolving markets, quality-as-a-service, machine communication, SCADA
  • IoT systems = basically two groups: 1) identification group (sensors, data gathering) 2) computational group (processing, data storage)
  • Limitations until now = cloud computing (centralized, far away from consumers and devices) doesn't fit the requirements of the IoT (distributed, in need of nearby storage, computational resources, and instant processing); bandwidth
  • Connection types = WiFi, Bluetooth, ZigBee, 2G/3G/4G/5G, radio, Z-Wave, 6LoWPAN, Thread, cellular, NFC, Sigfox, Neul, LoRaWAN

The IoT

The vision of the Internet of Things is still in the making.

With the latest developments in this interconnected world, new markets are emerging and a variety of requirements is born.

Wearables, smartphones, and domestic devices like smart-home solutions for an intelligent household demand an interconnectivity solution that has yet to come.

It’s no secret that almost every company is also working on solutions to make it happen: a world, where data is a more valuable resource than oil. If not today, then in the near future.

This leads to a point where today's technical barriers hinder tomorrow's progress.

The IoT, a distributed network spanning the world, is more than the Internet: a mesh-net connected via every possible connection type. Since devices work in local clusters, it's obvious that centralized components, sometimes on a different continent, don't fit into the greater picture.

Sensors, cameras, and smart devices often use ad-hoc solutions to function in their specific field, for example monitoring systems like SCADA that send valuable data to a nearby control center in order to optimize industrial processes.

What if these monitoring systems do time-sensitive work, but the current solutions are slow and, on top of that, centralized and insecure? Productivity could be better, employees may work in a more dangerous environment, and as a result the company could face problems.

Connected facilities incentivize industrial espionage and hacks.

Distributed denial of service (DDoS) attacks are a phenomenon of the last few years, in which malicious parties attack infrastructural points of the web in order to cripple the communication of certain systems and services.

Sometimes as a decoy for a hack, sometimes for political or activist reasons.

Not rarely, considerable downtimes create financial losses, and the blockage of regional infrastructure hits, next to the actual target, other companies that are located in the surrounding area.

A problem of the Internet, not necessarily of the IoT.

Due to its distributed mesh-net characteristics, the IoT is envisioned as a self-sufficient network that can connect devices of the identification group via many paths, not just one.

An attack on central points is by definition impossible, because there is no center in the IoT.

That leads to a natural resistance against DDoS attacks and other downtimes.

Legacy systems vs. new systems

An additional issue of cloud computing in the IoT is cost. Legacy systems tend to ignore huge amounts of data because there is neither the storage nor the need for them.

New systems in the IoT, with smart solutions, rely on this data, but sending it into the cloud would go beyond the scope of the IoT. Too much information is generated, and real-time analysis and centralized cloud-computing solutions conflict with each other, as uploading these huge amounts of data takes time and money, especially if the cloud storage is thousands of miles away.

Fog computing, however, creates a bridge between the identification group and the computational group: it forwards computational power to the edge of the network, where the data is generated and the results are needed.

The benefits of using Fog computing instead of legacy cloud systems are tremendous.

Varghese, Wang, et al. [2017] come to the conclusion that: “For an online game use-case, we found that the average response time for a user is improved by 20% when using the edge of the network in comparison to using a cloud-only model. It was also observed that the volume of traffic between the edge and the cloud server is reduced by over 90% for the use-case.”

This is just one use case; the result can be mirrored in many other settings.

In consumer markets, Quality of Service and Quality of Experience are important factors.

Another example would be the transparent customer. When such a customer enters a big supermarket, his views and interests could be analyzed within seconds.

Cameras can detect his interest in certain devices or components, and advertisements on monitors along his path can be adjusted to his specific needs. With old legacy systems this is impossible, due to the long processing times between these cameras, a cloud, and the computational resources; with fog computing, however, the data can be processed much faster and the necessary information delivered back to the customer along his way through the mall.

To draw a simplified picture of the fog-landscape:

The distributed mesh-net is growing in height on the z-axis, if you will, whereas decentralized and centralized networks grow on the x- and y-axes. Shorter paths from the data collectors to the computational resources are the result of fog computing.

Concerns can be addressed with IOTA

Whether it’s the data-integrity, optimization or protection of the in-house Research & Development data, companies look for a lasting solution.

When data is stored centrally, hackers usually use social engineering or phishing attacks to get access to it.

Whereas centrally stored data could be captured all at once with these methods, fog computing makes it possible to store sensitive information in small, distributed packets, with different passwords/keys/seeds to access them.

IOTA can deliver a unique solution here: a data stream, bound to countless seeds, in a distributed network, secured with sophisticated algorithms. Not even quantum computing would be a threat to the hashes.

As you may already know, IOTA is a distributed ledger technology that enables fee-free transactions.

For data transfer with fog computing, you wouldn't even need tokens; the only condition is to confirm two other transactions before sending one of your own.

A rule that enables true scalability for a billion device network on a global scale.

With Masked Authenticated Messaging, IOTA has an additional option to send and process sensitive data.

Now, a really big hurdle in the IoT is the availability of dozens of connections and different norms.

If devices could be connected in a uniform way, usability would increase. A plethora of standards built for the IoT can lead to a fragmentation of the network, as companies want to stick to their own standards to support their product line or roadmap.

If IOTA were the standard settlement and data layer, free to use, the Internet of Things could become a barrier-less environment with true scalability and data integrity.

Due to the value of the collected data, new markets would emerge that aim at selling this information in real time.

People could sell their consumer data, each time they enter a shop, in exchange for true nano-payments.

If data were collected in the fog, BigchainDB, a scalable distributed database for all kinds of data, could deliver the necessary infrastructure for customers, institutes, and companies.

A seamless solution for the IoT.

Fog computing is therefore the next necessary milestone in the field of the Internet of Everything and a vital part of IOTA's vision.


Video of Dominik Schiener's presentation at the Tech Open Air 2017








