Category Archives: Web 2.0

General topics, current trends, news

Skillmazing – Visualize your Skill Profile

This week our new platform Skillmazing enters public beta. Over the last four months we have developed a new way of managing skills and visualizing skill profiles.

Until last year, we managed our skills only in a table format. This was a good start, but it brought a number of problems: as the table rapidly grew in size, it became hard to retrieve information quickly. We therefore developed a new application called Skillmazing. The tool focuses on personal development and supports users in improving their skills. Since we think the application can benefit not only our company but everybody who is interested in developing their skills, we would like to share it with you.

What Skillmazing offers:

  • See what skills you have and what you would like to improve
  • Search for learning material and activities to improve your skills
  • Get your skills, goals and progress visualized
  • Save your visualizations as .svg or .png files
  • Search for like-minded people to improve your skills together
  • Share your profile on social platforms or via email

By default, all your data is invisible to other users. So create your profile and make it visible to others to present your skills and attract like-minded people!

Start Now

Choose from nine different visualizations and create personal profile visualizations like these:

  • Sunburst
  • Village Chart
  • Horizontal Bar Chart
  • Badges
  • House Chart
  • Vertical Bar Chart
  • Timeline
  • Puzzle

To see more, visit my skill profile here.

Skillmazing entered public beta this week, and we are looking forward to your feedback. Enjoy!

push.conference 2013

This October we attended the push.conference 2013 at the congress hall in Munich. The conference was held on Friday and Saturday. Upon entering, we immediately noticed that this conference was something special. First of all, we got our entrance pass along with eight stickers and had to pick four of them to add to our pass, showing our interests. This was a good starting point for conversations with other participants.

Sunny weather and good vibes

After grabbing some coffee and pretzels, we headed, highly motivated, to the first talk at 11 o’clock. The introduction truly lived up to the topic of the conference – interaction design and user experience: we had to stand up and introduce ourselves to the people around us. Darja Isaksson talked about the digital revolution from her own experience and advised the audience to team up and to value impact more than pride.

The following talk, ‘Multi & Cross-Device UX Concepts‘, went into more detail. Neil Calderwood explained why mobile first isn’t always the right path and introduced us to inside-out design, which takes into account the requirements of different devices instead of just extending the minimal mobile version into a desktop one.

During the lunch break (with sunshine and a variety of tasty wraps), we explored the booths of different student projects.

Booth safari

After the break, Lucia Terrenghi gave a talk entitled ‘Designing technologies for the next billion people: challenges and opportunities’. She pointed out how to design for different cultures, especially in the developing world.

The push.conference organizers introduced the concept of Lightning Talks, which gives new talents the opportunity to present their work and inspirations. Kalle Kormann-Philipson talked about Lean UX, Markus Steinhauser presented his startup Testbirds and Franz Bruckhoff gave us insights into his career path from developer to UX designer. He also mentioned that as a UX designer you should consider steering users’ expectations in the right direction.

Kevin Sweeney’s talk ‘The Unseen Experience: Putting Detail Into The Web‘ increased our awareness of the little details with big impact. For example, he showed how to improve loading performance by predicting the user’s next steps.

Friday’s last talk was given by Mike Lemmon, who spoke about ‘Design Languages for Interactions’. We got a first glimpse of a tiny wearable camera called MEME, taking party shots in automatic capture mode. Similar to Polaroids, the pictures are shown instantly after the shot.

The second day started with Elliot Woods talking about ‘Digital Light as a Semi-Material‘. He was followed by Wesley Grubbs with ‘Experiencing Data’, who made the point that human beings don’t think in numbers and bar charts, but in images. Therefore, it is important to tell a story with your data.

Sebastian Oschatz delighted us with his talk ‘Untangling Rectangles’, showing five strategies for moving beyond rectangular formats.

On the second day there were Lightning Talks as well. Julia Laub from onformative talked about data art and how it differs from infographics. She emphasized that adding data doesn’t necessarily add meaning. The CEO of LAB BINAER talked about his experiments with generative design and shared his philosophy with us: ‘creative design is not a job, it is a lifestyle’. Industrial designers from LUNAR Europe showed examples of their work and how they also consider the inactive state of a product. Jochen Leinberger and Roman Stefan Grasy presented their book ‘Prototyping Interfaces: Interaktives Skizzieren mit vvvv’. They pointed out how important it is to inspire people at an early stage.

Happy attendees

After the last break, Mariana Santos inspired us with her talk about ‘Visual Storytelling for the News‘. She told us about her personal background and digital journalism at The Guardian. She advised us to keep reinventing ourselves and to fail fast and succeed soon. Her positive attitude and the insights into her work for the London 2012 Olympic Games charmed and amused the audience in equal measure.

The last talk was given by Marcus Field about ‘Forms of Inquiry’. He shared his experiences with generative art and how clients respond to it.

Summarizing the lessons learned from the push.conference 2013:

  • take some risks
  • take responsibility for changes
  • don’t bend over backwards to meet unrealistic customer requirements
  • design and business development must go hand in hand
  • your brand is a promise
  • you can achieve anything if you know the process

The push.conference 2013 was a great experience for us and we’re already looking forward to next year’s edition.

If you want to know more about our experience at the conference, please contact Marlene.Gottstein@comsysto.com or Elisabeth.Engel@comsysto.com!

Velocity Europe 2012 Roundup

This year’s Velocity Europe took place in London from October 2nd to October 4th. It had two main topics: “Web Performance” and “Operations and Culture”. The first day was a warm-up day featuring longer, more interactive sessions.

Overall, the Web Performance talks were quite interesting, ranging from learning some new tricks about the Chrome Dev Tools from Google’s brilliant Ilya Grigorik to getting an outlook on HTTP 2.0 from Akamai’s Mark Nottingham. It seems that some current best practices for operating web applications might change in the future.

As a DevOps guy myself, I found the “Operations and Culture” talks really outstanding! There were some really inspiring talks about how Operations has come a long way and how it will have to change and adapt in the future. John Allspaw pointed out in his talk about “Escalating Scenarios” how important the human factor is when dealing with high-pressure situations. He stressed the importance of a company culture that embraces failure and encourages people to be honest about their mistakes. Only then can a company learn and get better at what it does. Opscode’s CTO Christopher Brown gave a brilliant keynote about how Operations has matured into its own field and offered an outlook on moving from a craft towards “Operations Sciences”. “DevOps Patterns Distilled” by Patrick Debois (Jedi BVBA), John Willis (enStratus), Gene Kim (IT Revolution Press) and Damon Edwards (DTO Solutions) was _the_ outstanding session! This new DevOps “Gang of Four” created some truly amazing groundwork and paved the way for DevOps to mature from a philosophical movement into a serious collection of practices!

Overall, the quality of the speakers, coming from companies like Google, Facebook, Akamai, Opscode and Etsy, was outstanding. The only really disappointing talk was “How Draw Something Absorbed 50 Million New Users, in 50 Days, with Zero Downtime”, which turned out to be mainly a sales pitch from Couchbase :-(

If you didn’t have the chance to come to London this year, be sure to check out the official website, where you can also find all of the slides!

http://velocityconf.com/velocityeu2012/

CS Labs: Two And A Half Lean

The other day at the comSysto Labs: this time we wanted to deepen the skills we had gained at the Lean Startup Machine workshop in London. At the same time, our ShoeOfTheDay app was facing the problem that our users were not adopting it.

So we decided to take a closer look at our ShoeOfTheDay app over the next 2.5 days using Lean approaches.

By the way, we are Diana, Thien-Thanh, Tim and Stefan.

ShoeOfTheDay is our Facebook application, which we have been developing for about a year. The idea behind it came from our developers. Users can show off their shoes in a shoe cabinet on Facebook, with a separate slot for each day of the week. They can also compete with their friends’ shoes on Facebook:

Unfortunately, the hoped-for rush of visitors has not materialized so far:

That is why we decided to create a Validation Canvas:

In the canvas:

  • a customer group is assumed
  • that has a specific problem
  • and a solution to this problem is formulated.

For our app, we made the following assumptions:

  • Our customer group is women aged 20 to 35
  • who want to talk to each other about shoes
  • and for whom our app could be the solution :-)

Then we tried to identify risks that could potentially endanger the product:

  • Users want to present their shoes to their friends.
  • They do this via an application on their PC
  • or they use a mobile app.
  • Users are willing to take the time to do so.

Our biggest risk was whether the women in our target group even want to present their shoes to their friends at all. To find out, we conducted a survey at LMU Munich. Our survey included the following questions:

  • When did you last buy a really great pair of shoes? (Is the participant interested in shoes at all? Ice-breaker to get the conversation going)
  • And what did your friends say about the shoes? (Does she feel the need to present her shoes?)
  • On what occasion did you present the shoes? (Which solutions has the participant already found?)
  • Which social media/applications (Facebook, WhatsApp) would you most likely use for this? (Competition?)

The target group answered these questions as follows:

  • All participants had bought new shoes within the last few months.
  • Only a few participants did not show their shoes to their friends.
  • Most women showed their shoes to their friends at private get-togethers, but many also used mobile applications to present them.
  • This question was answered with WhatsApp, Facebook and Skype.

From this we drew the following conclusions:

Women have a strong need to show their newly acquired shoes to their friends.

Some women rely on Skype and Facebook, while others use WhatsApp to send photos of their shoes to friends. So the interest in presenting shoes in a playful way is clearly there. Given this, we asked ourselves as a team what might be wrong with our app.

Through the built-in tracking we learned that users leave the app again after a short period of use and lose interest in it. We wanted to find out why.

To do so, we invited several people to test the application. The following usability problems emerged:

  • The desired shoe is not available in the app. In that case there is no way to store a picture of the shoe in the application. This point in particular is likely to be a knockout criterion for the acceptance of the app: how is a woman supposed to present her shoe if it is not available in the application?
  • It is not possible to search by category (pumps, ballerinas, sandals).
  • The application is very complicated to use. This includes the elementary functionality of selecting a shoe. In the worst case, a shoe retailer’s page opens and the user leaves the app.

Some of these problems were fixed during the cS Lab. However, there was not enough time to eliminate all of the issues.

As a next step, further acceptance and UI tests should be carried out to find out how users react to the improvements and to get feedback early on. The built-in tracking mechanisms can be used for this as well. But that story will, hopefully, be brought to a happy end in a future lab.

Big Data and Data Science – what’s really new?

Big Data is hype. It’s also a buzzword. Maybe a trend? Down-to-earth people could say it’s just mass data called “big”. Although there are many very large data warehouses in the BI world, data science seems obsessed with handling “big data – when the size of the data itself becomes part of the problem.” For Gartner and Forrester, even “big” is not enough anymore; they have started using the term “extreme”, and they are right – volume alone is not Big Data.

According to Gartner, Big Data is data at extreme scale when it comes to Volume, Velocity, Variety and Variability. Since the word “big” overemphasizes Volume, “extreme” might be the more appropriate term. Anyway, “big” is established, shorter and sounds better, so let’s stick to it. ;-) Big Data also goes better with big money – extreme money does sound strange, right? A new study from Wikibon pegs Big Data revenues at $5B in 2012, surging to more than $50B by 2017.

So what’s really new about Big Data? In order to find an answer we first have to ask ourselves: How come? What led to this trend? Let’s have a look at some other important and interdependent trends:

“Software is eating the world” and the Internet Revolution
Two decades ago you needed special training to use software systems. Consumers used their office suites, and the few websites out there were just bunches of static HTML files. Enterprises had software to support specific business functions, mostly backed by relational storage, and they had just started to put this relational data to use.

The rise of the modern Internet started a new trend in which all of the technology required to transform industries through software finally works and can be delivered widely at global scale. Today, consumers and businesses have moved online, where more than 2 billion people use broadband internet, and today’s internet is:
- easy to use and everywhere (pervasiveness)
- dynamic, complex and agile (variability)
- extremely large (volume)
- extremely quick (velocity)
- noisy (extracting the message is getting harder)
- vague and uncertain
- not well-structured and diverse (variety)
- not always consistent
- non-relational
- visual
while every single one of these attributes is getting more extreme.

The transformation from static Web 1.0 websites to Web 2.0 web applications is now continuing towards Web 3.0, the Semantic Web, where data, its semantics, and the insights and actions derived from it become the most important part of an internet service.

A Shift in Data
Is Big Data only about web or internet data? Not necessarily, but the WWW is still the main driver. Add to that a new awareness of an old fact: unlike people, not all data is created equal, and the inequality keeps growing. Many new consumer and enterprise apps create data footprints that grow ever larger and faster, in more formats, and become more complex. So why treat all data equally? Why would you want to store and process streams of RFID messages the same way as your business transaction data? Well, only if you have no choice.

Many people talk about unstructured data being Big Data. Thinking about the term “unstructured data” for longer than a few seconds raises the following questions: What is data without structure? Where does structure end? How can it be interpreted and analyzed?

The answers are: there is no data without structure. If there is absolutely no structure or context, it’s just noise and you can forget about analyzing it. Even a piece of text has a certain structure and context, so it can be mined to extract its semantics. What most people mean by “unstructured” is data coming from a “non-relational” source with varying structure. After 40 years of dealing with nice and tidy relational data in analytical environments, the brave new world may well seem a bit chaotic and unstructured. But it’s not, it’s just different.

NoSQL – new choice for Data Storage and Processing
In order to efficiently process this kind of data for generating insights and actions, a new set of data management and processing software has emerged. These software technologies are:
- mostly Open-Source and frequently JVM based
- excellent in scaling through massive parallelism on commodity computing capacity
- non-relational
- schemaless
- storing and processing all different kinds of data formats such as JSON, XML, Binary, Text, …
They represent the so far missing alternative for many use cases such as (complex) event processing, operational intelligence, machine learning, real-time analytics, genetic algorithms, sentiment analysis, etc.
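
To make the schemaless point concrete, here is a minimal sketch using the old (pre-3.x) MongoDB Java driver; the database name, collection name and field values are invented for illustration. Documents with completely different shapes, such as an RFID reading and a business transaction, can live side by side in one collection without any schema migration:

import java.util.Arrays;
import com.mongodb.BasicDBObject;
import com.mongodb.DBCollection;
import com.mongodb.Mongo;

// Minimal sketch: heterogeneous "events" stored in one schemaless collection.
// Host, database, collection and field names are assumptions for illustration.
public class SchemalessEvents {
    public static void main(String[] args) throws Exception {
        Mongo mongo = new Mongo("localhost", 27017);
        DBCollection events = mongo.getDB("analytics").getCollection("events");

        // An RFID reading keeps only the fields an RFID reading has...
        events.insert(new BasicDBObject("type", "rfid")
                .append("tagId", "04-A2-33")
                .append("readerId", 7));

        // ...while a business transaction in the same collection looks entirely different.
        events.insert(new BasicDBObject("type", "order")
                .append("customer", "ACME")
                .append("items", Arrays.asList("shoes", "socks"))
                .append("total", 79.90));

        mongo.close();
    }
}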

Traditional mass data storage and integration solutions in the domain of Data Warehousing and Business Intelligence are based on relational formats and batch processing, running for years on large, expensive and poorly scalable enterprise editions of RDBMS and on even more expensive enterprise hardware. As history has shown many times, it is not always the idea or the use case that goes looking for the right technology (as one would expect); sometimes it is the new technology that inspires people to generate ideas and drive innovation.

Looking at the components of a data-driven or analytical application, the following technologies associated with the term “Big Data” have already taken a leading role:
- MongoDB for Data Storage, Real-Time Processing and Operational Intelligence: a JSON-based, schema-less, document-oriented DBMS
- Apache Hadoop for ETL/Batch Processing, implementing the MapReduce algorithm for aggregation
- R Project for Statistical Computing and Data Visualization
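
To illustrate the MapReduce idea behind Hadoop, here is a minimal word-count sketch against the Hadoop MapReduce API; the class names and the choice of counting words are assumptions for illustration only. The mapper emits a (word, 1) pair per token and the reducer sums the counts per word:

import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

// Mapper: emits (word, 1) for every token in a line of input.
public class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        for (String token : value.toString().split("\\s+")) {
            if (!token.isEmpty()) {
                word.set(token);
                context.write(word, ONE);
            }
        }
    }
}

// Reducer: sums all counts emitted for one word.
class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable count : values) {
            sum += count.get();
        }
        context.write(key, new IntWritable(sum));
    }
}

Because each mapper and reducer only ever sees one record or one key at a time, Hadoop can spread them across many commodity machines, which is exactly the scaling property described above.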

Hardware and High Performance Cloud Computing
All of the above technologies enable high-performance computing by scaling out across clusters of commodity hardware. As computing capacity keeps getting cheaper and seemingly limitless through various “cloud” offerings, we no longer have to ask ourselves “Do we really need this data?” before storing it. Store first, analyze later is a reality today, not only because of cheap disks, but also because we can add additional computing capacity for a limited time whenever we want to run our analyses.

It is the combination of the trends mentioned above that adds up to a different way of looking at data today. These trends certainly depend on and affect each other, but explaining that would lead off the subject. Being a practical person, I would like to go into more detail and describe an analytical platform based on the three leading technologies: MongoDB, Apache Hadoop and R. Not now and not here, so stay tuned…

Links

http://en.wikipedia.org/wiki/Big_data

http://wikibon.org/wiki/v/Big_Data_Market_Size_and_Vendor_Revenues

http://online.wsj.com/article/SB10001424053111903480904576512250915629460.html

http://www.forbes.com/sites/ciocentral/2012/06/06/seven-best-practices-for-revolutionizing-your-data/

http://www.itnext.in/content/volume-alone-not-big-data-gartner.html

http://www.forbes.com/sites/danwoods/2012/03/08/hilary-mason-what-is-a-data-scientist/

http://www.marketwatch.com/story/big-data-is-big-business-50b-market-by-2012-2012-02-22

http://data-virtualization.com/2011/05/23/gartner-and-forrester-%E2%80%9Cnearly%E2%80%9D-agree-on-extreme-big-data/

http://practicalanalytics.wordpress.com/2011/11/11/big-data-infographic-and-gartner-2012-top-10-strategic-tech-trends/

http://www.jaspersoft.com/bigdata#bigdata-middle-tab-5

http://www.datasciencecentral.com/profiles/blogs/5-big-data-startups-that-matter-platfora-datastax-visual-ly-domo-

http://www.thisisthegreenroom.com/2011/data-science-vs-business-intelligence/

http://tdwi.org/articles/2012/02/07/big-data-killed-data-modeling-star.aspx?utm_source=twitterfeed&utm_medium=twitter

http://blogs.wsj.com/tech-europe/2012/02/10/big-data-demands-new-skills/?mod=google_news_blog

http://radar.oreilly.com/2010/06/what-is-data-science.html

http://www.citoresearch.com/content/growing-your-own-data-scientists

Munich MongoDB User Group: First Meetup

You are invited to the first meetup of the Munich MongoDB User Group!

Date: 6/28/2011
Time: Starting 7pm
Who: Brendan McAdams, 10gen Corp.
Subject: „A MongoDB Tour for the Experienced and Newbie Alike“
Location: Münchner Technologiezentrum, comSysto GmbH, Agnes-Pockels-Bogen 1, D – 80992 Munich
http://www.comsysto.com/
http://twitter.com/#!/comsysto

A Few Facts on MongoDB:
„MongoDB is an open source, document-oriented database designed with both scalability and developer agility in mind. Instead of storing your data in tables and rows as you would with a relational database, in MongoDB you store JSON-like documents with dynamic schemas. The goal of MongoDB is to bridge the gap between key-value stores (which are fast and scalable) and relational databases (which have rich functionality).
Using BSON (binary JSON), developers can easily map to modern object-oriented languages without a complicated ORM layer. This new data model simplifies coding significantly, and also improves performance by grouping relevant data together internally.
MongoDB was created by former DoubleClick Founder and CTO Dwight Merriman and former DoubleClick engineer and ShopWiki Founder and CTO Eliot Horowitz. They drew upon their experiences building large scale, high availability, robust systems to create a new kind of database. MongoDB maintains many of the great features of a relational database — like indexes and dynamic queries. But by changing the data model from relational to document-oriented, you gain many advantages, including greater agility through flexible schemas and easier horizontal scalability.“
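
To give a feel for what “JSON-like documents with dynamic schemas”, “indexes” and “dynamic queries” look like from code, here is a minimal sketch using the MongoDB Java driver as it was at the time (pre-3.x API); the database name, collection name and sample fields are assumptions for illustration:

import java.util.Arrays;
import com.mongodb.BasicDBObject;
import com.mongodb.DB;
import com.mongodb.DBCollection;
import com.mongodb.DBObject;
import com.mongodb.Mongo;

// Minimal sketch: build a document straight from the object model (no ORM layer),
// create an index and run a dynamic query. All names are illustrative assumptions.
public class MeetupQuickstart {
    public static void main(String[] args) throws Exception {
        Mongo mongo = new Mongo("localhost", 27017);
        DB db = mongo.getDB("usergroup");
        DBCollection attendees = db.getCollection("attendees");

        // A JSON-like document with a nested array - no table or schema defined up front.
        BasicDBObject doc = new BasicDBObject("name", "Jane Doe")
                .append("interests", Arrays.asList("MongoDB", "Wicket"))
                .append("registered", true);
        attendees.insert(doc);

        // Indexes and dynamic queries work much like in a relational database.
        attendees.ensureIndex(new BasicDBObject("name", 1));
        DBObject janeDoe = attendees.findOne(new BasicDBObject("name", "Jane Doe"));
        System.out.println(janeDoe);

        mongo.close();
    }
}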

Do you want to learn more about MongoDB? Then please register via
http://www.meetup.com/Munchen-MongoDB-User-Group/
or
https://www.xing.com/events/munich-mongodb-user-group-meetup-781984
and pay us a visit! The number of participants is unfortunately limited to 50.

For any further information, please contact Matija Gasparevic (office@comsysto.com).

Apache Wicket – Best practices

Apache Wicket is enjoying ever-growing popularity and is being used in more and more projects. Thanks to Wicket’s power, many features can be implemented quickly and easily. There are many ways to implement these features. This article offers some recipes for working with Apache Wicket correctly, efficiently and sustainably.

This article is aimed at developers who have already gained some initial experience with Apache Wicket. Developers entering the Wicket world often struggle because they adapt development approaches from the JSF or Struts world. Those frameworks rely primarily on procedural programming, whereas Wicket relies heavily on object orientation. So forget the Struts and JSF patterns, otherwise you will not enjoy Wicket for long.
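
To illustrate the difference in style, here is a minimal sketch of Wicket’s object-oriented approach; the page, component ids and counter example are assumptions for illustration. Behaviour lives in component objects added to the page rather than in procedural action handlers, and the label picks up the new value through its model on the next render:

import org.apache.wicket.markup.html.WebPage;
import org.apache.wicket.markup.html.basic.Label;
import org.apache.wicket.markup.html.link.Link;
import org.apache.wicket.model.PropertyModel;

// Minimal sketch of a Wicket page: components are objects with their own behaviour.
public class CounterPage extends WebPage {

    private int counter = 0;

    public CounterPage() {
        // The label reads its value from the page via a model - no manual refresh code.
        add(new Label("counter", new PropertyModel<Integer>(this, "counter")));

        // The link encapsulates its click behaviour instead of a procedural action handler.
        add(new Link<Void>("increment") {
            @Override
            public void onClick() {
                counter++;
            }
        });
    }

    public int getCounter() {
        return counter;
    }
}

The matching markup would simply contain elements with wicket:id="counter" and wicket:id="increment"; the component tree built in Java decides what happens.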


Apache Wicket Training by comSysto and jWeekend

comSysto and jWeekend invite you to the ultimate Apache Wicket training at the Münchner Technologiezentrum (MTZ). In two days, learn with us how to design and implement next-generation web applications with the leading front-end framework, using carefully coordinated theory sessions and practical examples.

Start: Thu, 11.11.2010, 09:00
End: Fri, 12.11.2010, 17:00

Location:
comSysto GmbH (Münchner Technologiezentrum)
Agnes-Pockels-Bogen 1
80992 München

Course fee per participant: EUR 800

Course materials and practical exercises are in English; our trainers are German-speaking.

Registration and more details via XING:

https://www.xing.com/events/apache-wicket-training-comsysto-jweekend-581458

Flyer download:

http://www.comsysto.com/flyer/ApacheWicket.pdf

Registration by email and contact: office[at]comsysto.com.

We look forward to hearing from you!

The Community Company

Swiss Re says it clearly: as a real, existing community, a company needs an electronic representation of that community in order to get the flow of information between employees up to speed in a complex market environment.

http://www.cio.de/strategien/methoden/2235769/?qle=rssfeed_

Long-term market success, not short-term ROI considerations, will prove Swiss Re right!