Data structures in Windows 8 have changed, but the really cool thing is that the Modern UI lets data structure classes have a UX of their own. Wow!
Let’s say you are running a data structures class. There are a few students who are “engine designers”: they LOVE the hard problems and live to solve the next homework assignment. Then there is everyone else, wondering why on earth they have to take this class to land the database job they will likely get out of college. To them, this is just calculus with if statements.
Many students know that their data structures knowledge will likely not be used the way it is taught in the classroom. That’s not your fault, but things are changing. So what if you created some extra modules, offered as extra credit, to motivate those low-“B” or “C+” students to do more?
My thought is that you could use XAML and C++ not only to implement data structures but also to give students the chance to build apps (outside of class) that stimulate their thinking about how to make money and build their CVs. Pretty major.
Let’s take a look at the MIT Algorithms and Data Structures classroom material. (If you use this material, please make a contribution. I have a donation to this excellent program built into my budget, and Microsoft will match it dollar for dollar; maybe your company will too.)
Now what if your students could do the homework and at the same time start moving toward a Windows 8 application that could be entered into the Microsoft Store to make money from advertisements or actual sales? That might make your classes a little more popular.
Here is a table I am working on that maps the course topics to Windows 8/XAML/C++/Cloud demonstrations. But I have to take a break here; I have a massive headache and can’t think. To which you might say: how is that different than normal?
Introduction to Algorithms | Using Windows 8/WinRT | Cloud to demonstrate
Argue the correctness of algorithms using inductive proofs and loop invariants.
Analyze worst-case running times of algorithms using asymptotic analysis.
Compare the asymptotic behaviors of functions obtained by elementary composition of polynomials, exponentials, and logarithmic functions.
Describe the relative merits of worst-, average-, and best-case analysis.
Analyze average-case running times of algorithms whose running time is probabilistic. Employ indicator random variables and linearity of expectation to perform the analyses. Recite analyses of algorithms that employ this method of analysis.
Explain the basic properties of randomized algorithms and methods for analyzing them. Recite algorithms that employ randomization. Explain the difference between a randomized algorithm and an algorithm with probabilistic inputs.
Analyze algorithms using amortized analysis, when appropriate. Recite analyses of simple algorithms that employ this method of analysis. Describe different strategies for amortized analysis, including the accounting method and the potential method.
Describe the divide-and-conquer paradigm and explain when an algorithmic design situation calls for it. Recite algorithms that employ this paradigm. Synthesize divide-and-conquer algorithms. Derive and solve recurrences describing the performance of divide-and-conquer algorithms.
Describe the dynamic-programming paradigm and explain when an algorithmic design situation calls for it. Recite algorithms that employ this paradigm. Synthesize dynamic-programming algorithms, and analyze them.
Describe the greedy paradigm and explain when an algorithmic design situation calls for it. Recite algorithms that employ this paradigm. Synthesize greedy algorithms, and analyze them.
Explain the major algorithms for sorting. Recite the analyses of these algorithms and the design strategies that the algorithms embody. Synthesize algorithms that employ sorting as a subprocedure. Derive lower bounds on the running time of comparison-sorting algorithms, and explain how these bounds can be overcome.
Explain the major elementary data structures for implementing dynamic sets and the analyses of operations performed on them. Recite algorithms that employ data structures and how their performance depends on the choice of data structure. Synthesize new data structures by augmenting existing data structures. Synthesize algorithms that employ data structures as key components.
Explain the major graph algorithms and their analyses. Employ graphs to model engineering problems, when appropriate. Synthesize new graph algorithms and algorithms that employ graph computations as key components, and analyze them.
Demonstrate a familiarity with applied algorithmic settings - such as computational geometry, operations research, security and cryptography, parallel and distributed computing, operating systems, and computer architecture - by reciting several algorithms of importance to different fields.
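As a concrete starting point for a module, the divide-and-conquer and sorting objectives above map naturally onto a first homework-to-app exercise. Here is a minimal C++ merge sort sketch (the function name is my own, not from the course material):

```cpp
#include <algorithm>
#include <iterator>
#include <vector>

// Divide-and-conquer merge sort: split the input in half, sort each
// half recursively, then merge. The recurrence T(n) = 2T(n/2) + O(n)
// solves to O(n log n).
std::vector<int> merge_sort(std::vector<int> v) {
    if (v.size() <= 1) return v;
    std::size_t mid = v.size() / 2;
    std::vector<int> left  = merge_sort({v.begin(), v.begin() + mid});
    std::vector<int> right = merge_sort({v.begin() + mid, v.end()});
    std::vector<int> out;
    out.reserve(v.size());
    std::merge(left.begin(), left.end(), right.begin(), right.end(),
               std::back_inserter(out));
    return out;
}
```

A student version of this could feed the sorted output straight into a XAML list control, which is exactly the kind of homework-plus-UI crossover I have in mind.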
Advanced Data Structures (I haven’t started on this table yet):
Classic comparison-based data structures.
The area is still rich with open problems, such as whether there is a single best (dynamically optimal) binary search tree.
Dynamic graph problems.
In almost any network, a link's availability and speed are anything but constant, which has led to a re-evaluation of the common understanding of graph problems: how do you maintain essential information, such as a minimum-weight spanning forest, while the graph changes?
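Fully dynamic spanning forests (with edge deletions) are genuinely hard; a classroom-friendly approximation is the insert-only case, which a disjoint-set forest handles in near-constant amortized time. A sketch, with names of my own choosing:

```cpp
#include <numeric>
#include <utility>
#include <vector>

// Disjoint-set forest (union-find) with path halving and union by
// size. Supports edge insertions and connectivity queries -- the
// insert-only special case of dynamic graph connectivity.
struct DisjointSet {
    std::vector<int> parent, size;
    explicit DisjointSet(int n) : parent(n), size(n, 1) {
        std::iota(parent.begin(), parent.end(), 0);  // each node is its own root
    }
    int find(int x) {
        while (parent[x] != x) {
            parent[x] = parent[parent[x]];  // path halving keeps trees shallow
            x = parent[x];
        }
        return x;
    }
    bool unite(int a, int b) {              // insert edge (a, b)
        a = find(a); b = find(b);
        if (a == b) return false;           // edge closes a cycle
        if (size[a] < size[b]) std::swap(a, b);
        parent[b] = a;
        size[a] += size[b];
        return true;
    }
    bool connected(int a, int b) { return find(a) == find(b); }
};
```

Returning `false` from `unite` when an edge closes a cycle is exactly the test Kruskal's algorithm uses while building a minimum-weight spanning forest.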
Integer data structures:
Beating the O(lg n) barrier in sorting and searching. If you haven't seen this before, beating O(lg n) may come as a surprise. If you have seen this before, you might think that it's about a bunch of messy bit tricks. In fact, it is about fundamental issues regarding information and communication. We hope to give a cleaner and more modern view than you might have seen before, including coverage of powerful lower bounds.
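One way to make the "beating O(lg n)" claim tangible without the heavy machinery is LSD radix sort on fixed-width integers: no comparisons at all, so the Omega(n lg n) comparison-sort lower bound simply does not apply. A sketch, assuming 32-bit unsigned keys and 8-bit digits:

```cpp
#include <array>
#include <cstddef>
#include <cstdint>
#include <vector>

// LSD radix sort on 32-bit keys, 8 bits per pass: four stable
// counting-sort passes, O(n) work per pass, no comparisons.
void radix_sort(std::vector<std::uint32_t>& a) {
    std::vector<std::uint32_t> tmp(a.size());
    for (int shift = 0; shift < 32; shift += 8) {
        std::array<std::size_t, 257> count{};  // histogram of byte values
        for (std::uint32_t x : a) ++count[((x >> shift) & 0xFF) + 1];
        for (int i = 0; i < 256; ++i) count[i + 1] += count[i];  // prefix sums
        for (std::uint32_t x : a) tmp[count[(x >> shift) & 0xFF]++] = x;
        a.swap(tmp);  // four swaps total, so the result ends up in a
    }
}
```

The integer-data-structure results in the course go far beyond this (van Emde Boas trees, fusion trees), but radix sort is the gentlest demonstration that the comparison barrier is not fundamental.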
Geometric data structures:
Segment trees, range trees, partition trees, dynamic convex hull, etc. In particular, range queries have surprising equivalences to problems on trees.
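To give a feel for the segment-tree idea: an array-backed binary tree where each internal node stores an aggregate of its children, giving O(lg n) point updates and range queries. A minimal iterative sketch for range sums (one common segment-tree variant; the struct name is my own):

```cpp
#include <vector>

// Iterative segment tree over n leaves stored at t[n..2n): node i's
// children are 2i and 2i+1, and each internal node holds the sum of
// its subtree's leaves.
struct SegTree {
    int n;
    std::vector<long long> t;
    explicit SegTree(const std::vector<long long>& a)
        : n((int)a.size()), t(2 * a.size()) {
        for (int i = 0; i < n; ++i) t[n + i] = a[i];
        for (int i = n - 1; i > 0; --i) t[i] = t[2 * i] + t[2 * i + 1];
    }
    void update(int i, long long v) {  // set a[i] = v, repair ancestors
        for (t[i += n] = v; i > 1; i /= 2) t[i / 2] = t[i] + t[i ^ 1];
    }
    long long query(int l, int r) {    // sum of a[l..r), half-open
        long long s = 0;
        for (l += n, r += n; l < r; l /= 2, r /= 2) {
            if (l & 1) s += t[l++];    // l is a right child: take it, move right
            if (r & 1) s += t[--r];    // r is a right boundary: step left, take it
        }
        return s;
    }
};
```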
Data structures for querying large collections of large strings:
Think DNA sequences.
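Production genomics tools lean on suffix arrays and similar indexes, but a trie already demonstrates the core idea: query time depends on the length of the query string, not on how many sequences are stored. A sketch over the four-letter DNA alphabet (my own naming; assumes inputs contain only A, C, G, T):

```cpp
#include <array>
#include <memory>
#include <string>

// Trie over the DNA alphabet {A, C, G, T}. insert/contains cost
// O(length of the string), independent of the collection's size.
struct DnaTrie {
    struct Node {
        std::array<std::unique_ptr<Node>, 4> child;
        bool terminal = false;  // true if a stored string ends here
    };
    Node root;
    static int code(char c) {   // A=0, C=1, G=2, T=3
        switch (c) {
            case 'A': return 0;
            case 'C': return 1;
            case 'G': return 2;
            default:  return 3;
        }
    }
    void insert(const std::string& s) {
        Node* n = &root;
        for (char c : s) {
            auto& slot = n->child[code(c)];
            if (!slot) slot = std::make_unique<Node>();
            n = slot.get();
        }
        n->terminal = true;
    }
    bool contains(const std::string& s) const {
        const Node* n = &root;
        for (char c : s) {
            n = n->child[code(c)].get();
            if (!n) return false;
        }
        return n->terminal;
    }
};
```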
Self-adjusting data structures.
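Splay trees are the headline self-adjusting structure, but the simplest member of the family is a move-to-front list, which rearranges itself so recently accessed items get cheaper. A sketch (the struct name is mine):

```cpp
#include <list>

// Move-to-front list: every successful lookup moves the found element
// to the head, so frequently accessed items drift toward O(1) access.
// A classic subject of amortized and competitive analysis.
struct MtfList {
    std::list<int> items;
    bool access(int x) {
        for (auto it = items.begin(); it != items.end(); ++it) {
            if (*it == x) {
                items.splice(items.begin(), items, it);  // move node to front
                return true;
            }
        }
        return false;
    }
};
```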
Persistent data structures and retroactive data structures.
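A persistent structure keeps every old version alive after an update. The textbook starting point is a functional cons list with structural sharing, which a shared pointer models almost directly. A sketch, with names of my own:

```cpp
#include <memory>

// Persistent singly linked list: push_front returns a *new* version
// that shares its entire tail with the old one, so every prior
// version stays valid and queryable.
struct PList {
    struct Node {
        int value;
        std::shared_ptr<const Node> next;
    };
    std::shared_ptr<const Node> head;

    PList push_front(int v) const {
        return {std::make_shared<const Node>(Node{v, head})};
    }
    bool empty() const { return !head; }
    int front() const { return head->value; }
    PList tail() const { return {head->next}; }
};
```

Retroactive structures go further still, allowing updates to be inserted into the *past* of the operation sequence; that needs much heavier machinery than this sketch.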
Succinct data structures.
Optimizing space is essential as data size reaches new orders of magnitude (again, think Google and DNA). Some data structures require no space beyond the raw data (carefully ordered) and still answer queries relatively quickly.
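The "no space beyond the raw data, carefully ordered" idea is easy to demonstrate at its most basic: a sorted array encodes its own search tree in the ordering alone, so membership queries run in O(lg n) with zero pointer overhead. (Succinct structures in the course go much further, e.g. compressed bit vectors with rank/select.)

```cpp
#include <algorithm>
#include <vector>

// A sorted array as an implicit data structure: the ordering itself
// plays the role of the tree, so queries need no extra storage.
bool member(const std::vector<int>& sorted, int key) {
    return std::binary_search(sorted.begin(), sorted.end(), key);
}
```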
Data structures optimized for external memory, and cache-oblivious data structures.
Any problem (e.g., sorting, priority queues) is different when you're dealing with disk instead of main memory, or you care about cache performance.
Memory hierarchies have become important in practice because of the recent escalation in data size.