Who owns our past

This is a very old draft of a thought. Finally, after 10 years, it is time to publish a much smaller version.

As search engines dominate digitization, an old fear comes back to haunt me. What story will a future generation hear or read? Will they ever get the time and exposure to explore different points of view? Will a small set of viewpoints dominate the narration?

I have always wondered how historians tell the story in detail by looking at artefacts and whatever is left over physically. Sometimes they also reference written material and interpret it to mean something. I have also wondered whether the people who wrote things wilfully changed the narrative (from the Bible to every other written word). What about people who did not document anything? How many documents honestly record everything that happened, without bias of any kind? What about people whose entire past was erased by a new religion or a new set of people? In the digital world, search makes it frighteningly easy to obliterate your identity and the information about a community (news/views/existence). Since the veracity of information is always suspect, we need better alternatives, or at least to ensure that national archives stay physical and open up their versions of the narrative.

Over the last century newspapers have played an important role in disseminating information and helping people form opinions. They do tend to focus on the immediate and tactical. Long-term investigations and viewpoints come out in magazines, obscure books, or worse, academic journals. Nobody has time to read through everything. Will future generations have enough information, or will they get curated data?

It is in our interest to push search engines, social networks, aggregators, and applications to share their ranking criteria for relevance, the links they are not showing, the ones withdrawn due to dispute, and the ones withdrawn due to laws (government x).

Left Facebook, planning to leave LinkedIn too… just an infinitesimally small statistic

It is quite some time since I left FB; I keep in touch with friends via phone calls or meeting in person. Other friends are checked on occasionally via my better half’s account. When FB announced the results of its “experiment”, that was it for me.

I am skeptical of news aggregators, search sites and now social networking sites. I was personally appalled by folks pushing exaggerated versions of themselves from their work life.

Lack of transparency
I do not know why certain news items appear in Google News and others never make it. There is neither explanation nor transparency. Over time only one “viewpoint” will dominate, depending on the ownership of these places. It scares the hell out of me.

I do not get many job offers on LinkedIn anyway, and their Pulse is heavily skewed towards interests which do not align with mine. The flow of stories and updates also seems based less on freshness and more on “linked to” and the like. I have no reason to follow these. I rarely have conversations there.

The only thing which works is Prismatic – I wish Twitter would buy them. They bring in stuff I am interested in and let me explore it at my own pace, unlike Twitter, where stuff is lost after that instant or a predefined window controlled by Twitter. I am hoping for more context, more “permanency”, rather than the immediate day or two. I wish there was better search; right now a lot of stuff is locked into individual networks. Twitter’s search has to improve. Please. Just don’t focus on the Twitter stream; there is a bigger world outside.

And yes, I am keeping my offline/online conversation tool, email, handy. I will also scare people away by putting random stuff on the blog. :)

Apache Flink – SQL support?

Nowadays the challenge is that the moment you blink, something new will have popped up in the data processing world – either as a full-fledged backend or as yet another middle layer in between.

Apache Flink promises to take advantage of a “declarative” model, but at present it offers only a Java/Scala API. One of the fastest ways for any of these intermediaries to succeed is to adopt something that has been around for decades. The most successful DSL of our times is SQL – the way Apache Drill is adding it via Optiq is a great first step. Hopefully the underlying stores’ decades of work do not go to waste.
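To make that concrete, here is a hedged sketch (the events table, its columns, and the query itself are hypothetical) of the kind of declarative statement a SQL layer over an engine like Flink or Drill would let us write, instead of hand-coding the equivalent map/filter/group pipeline against a Java/Scala API:

    -- Hypothetical "top clickers" query over a hypothetical events source.
    -- A SQL layer (Optiq, in Drill's case) plans and optimizes this for us;
    -- without it, the same logic is an imperative pipeline we maintain by hand.
    SELECT user_id,
           COUNT(*) AS click_count
    FROM   events
    WHERE  event_type = 'click'
    GROUP  BY user_id
    ORDER  BY click_count DESC
    LIMIT  10;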

It also ensures you do not accumulate karma against the demigod DBA who does not look forward to learning new things anymore. He joined that world because there were few things to do: storage layout, profiler, queries, indexes, backups/restores, HA/DR… Kidding. Just kidding.


10 things I wish my datastore would do (updated: is DocumentDB my savior?)

We generally use datastores to ingest data and to try to make some meaning out of it by means of reports and analytics. Over the years we have had to make decisions about adopting different stores for “different” workloads.

The simplest case is analysis, where we offload to pre-aggregated values with either columnar or distributed engines to scale out over the volume of data. We have also seen the rise of stores whose storage layout is friendly to ranges of data. Then we have some which allow very fast lookups, maturing into doing aggregations on the fly. We have also seen the use of data-structure stores – the hash-table-inspired designs versus the ones which don sophisticated avatars (gossip protocols, vector clocks, Bloom filters, LSM trees).

That other store which pushed compute to storage is undergoing a massive transformation, adopting streaming and regular OLTP (hopefully) on top of its usual data reservoir image. Then we have the framework-based plug-and-play systems doing all kinds of sophisticated streaming and other wizardry.

Many of the stores require extensive knowledge of their internals: how data is laid out, techniques for using the right data types, how data should be queried, issues of availability – all while taking decisions which are generally “understandable” to the business stakeholders. When things go wrong, the tools range from a bare error log to the actual “path of execution” of the query. At present there is a lot of ceremony around capacity management and around how data changes are logged and pushed to another location. This much detail is a great “permanent job guarantee” but does not add much long-term value for the business.

Update (22 Aug 2014) – DocumentDB seems to take away most of the pain – http://azure.microsoft.com/en-us/documentation/services/documentdb/

  1. Take away my schema design issues as much as it can

What do I mean by this? Whether it is a traditional relational database or a new-generation NoSQL store, one has to think through either the ingestion pattern or the query pattern to design the store’s representation of entities. This is by nature a productivity killer and creates an impedance mismatch between the storage representation and the application’s representation of those entities (a small illustration follows the update below).

Update (22 Aug 2014) – DocumentDB – need to test with a good amount of data and query patterns, but it looks like, with auto-indexing and SSDs, we are on our way here.
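As a small illustration of the mismatch (table, column, and key names are hypothetical), a normalized relational design forces every read of one logical “order” entity to be stitched back together, whereas a document store can hold it as a single document:

    -- Reassembling one Order entity that schema design spread across three tables.
    SELECT o.order_id, o.ordered_at, c.customer_name, i.sku, i.quantity
    FROM   orders o
    JOIN   customers   c ON c.customer_id = o.customer_id
    JOIN   order_items i ON i.order_id    = o.order_id
    WHERE  o.order_id = 42;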

  2. Take away my index planning issues

This is another of those areas where a lot of heartburn takes place, since many innards of the store’s implementation are exposed. Done completely automagically, this would be a great time-saver: just look at the queries and either create the required indexes or drop them. Many performance regressions are introduced as small changes accumulate in the application and trickle down to the database level (a sketch of the manual chore follows the update below).

Update (22 Aug 2014) – DocumentDB does it automatically; it has indexes on everything. It only requires me to drop what I do not need. Thank you.
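For contrast, here is a hedged sketch of the manual chore being automated away (index, table, and column names are hypothetical): today we profile the slow queries and hand-manage indexes, and every such statement is a potential performance regression waiting to be forgotten.

    -- Hand-created to serve one hot query; must be revisited whenever the query changes.
    CREATE INDEX ix_orders_customer_date ON orders (customer_id, order_date);
    -- Dropped once profiling shows it is unused (an unused index still taxes every write).
    DROP INDEX ix_orders_by_status ON orders;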

  3. Make scale out/up easier

Again, this is exposed to the end application designer in terms of which entities should be sharded vertically or horizontally. It ties back to point 1 in terms of ingestion or query patterns. It makes or breaks the application’s performance and has an impact on the evolution of the application.

Update (22 Aug 2014) – DocumentDB makes it a no-brainer again. Scale-out is done in capacity units (CUs). Need to understand how the sharding is done.

  4. Make “adoption” easier by using existing declarative mechanisms for interaction. Today one has to choose the store’s way rather than good old DDL/DML, which is at least 90% the same across systems. This induces fatigue in ISVs and larger enterprises who look at the cost of “migration back and forth”. Declarative mechanisms have this sense of a lullaby to calm the mind, and we indulge in scale-up first followed by scale-out (painful for the application).

Make sure the majority of the clients are on par with each other. We may not need something immediately for Rust, but at least ensure that PHP, Java, and .NET native and derived languages have robust enough interfaces.

Make it easier to “extract” my data in case I need to move out. Yes, I know this is the least likely place where resources will be spent, but it is super-essential and provides the trust needed for the long term.

Lay out the roadmap in simple terms – where you are moving – so that I do not spend time on activities which will become part of the offering.

Lay out in simple terms where you have seen people having issues or making wrong choices, and share the workarounds. Transparency is the key. If the store is not a good place for doing the latest “x/y” kind of work, share that and we will move on.

Update (22 Aug 2014) – DocumentDB provides a SQL interface!
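For example, a lookup in DocumentDB’s SQL dialect over a document collection reads reassuringly like the SQL we already know (the property names here are hypothetical):

    -- "c" is the alias for the queried collection; nested properties are reached with dot paths.
    SELECT c.id, c.customer.name, c.total
    FROM c
    WHERE c.status = 'open' AND c.total > 1000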

  5. Do not make choosing the hardware a career-limiting move. We all know stores like memory, but persistence is key for trust. SSD/HDD, CPU/cores, virtualization impact – way too many moving choices to make. Make the 70-90% scenarios simple to decide. I can understand that some workloads require a lot of memory, or only memory, but do not present a swarm of choices. Do not tie us down to specific brands of storage or networking which we cannot expect to see around after a few years.

In the hosted world, pricing has become crazier. Lay out in simple-to-understand terms how costing is done. In a way, licensing by cores/CPUs was great because I did not have to think much: I pretty much over-provisioned, or did a performance test and moved on.

Update (22 Aug 2014) – DocumentDB again simplifies the discussion: it is SSD-backed and pricing is very straightforward – requests, not reads, not writes, not indexed collection size.

  6. Resolve HA/DR in a reasonable manner. Provide a simple guide to understanding the hosted versus host-your-own worlds. Share in a clear manner how clients should connect and fail over. We understand distributed systems are hard, and if the store supports the distributed world, help us navigate the impact and the choices in simple layman’s terms, or in terms of something we are already aware of.

If there is an impact in terms of consistency, please let us know. Some of us care more about it than others. Eventual is great, but the day I have to say “we are waiting for logs to get applied, so the reports are not ‘factual’ yet” is not something I am gung-ho about.

Update (22 Aug 2014) – DocumentDB looks highly available within the local DC. I am assuming cross-DC DR is on the radar. DocumentDB shares the available consistency levels clearly.

  7. Share clearly how monitoring is done for the infrastructure in both the hosted and host-your-own cases. Share a template of “always monitor these, and take these actions” – a sort of literal rulebook which again makes adoption easier.

Update (22 Aug 2014) – DocumentDB provides out-of-the-box monitoring; need to see the template or the two things to monitor – I am guessing operation latency is one and size is the other. I need to think through the scale-out unit. I am sure as more people push, we will be in a better place.

  8. Share how data at rest and data in transit can be secured and audited in a simple fashion. For the last piece, even if actions are merely tracked, we will have a simpler life.

Update (22 Aug 2014) – DocumentDB – looks like admin and user permissions are separate. Securing the stored data is still the end developer’s responsibility.

  9. Share a simple guide for operations and day-to-day maintenance. This will be a life-saver: the x things to look out for, how to do backups and checks, how to do HA, DR checks, and performance-issue drill-downs – normally part of the data head’s responsibility. Do we look out for unbalanced usage of the environment? Is there some resource which is getting squeezed? What should we do in those cases?

Update (22 Aug 2014) – DocumentDB – cases where you need older data because a user deleted something inadvertently look like something users can push for.

Points 1-4 make adoption easier and the latter ones help with continued use.

Dentist appointment calendar on paper

My wonderful dentist has had this wonderful form of calendar for a decade. He has moved to acquire the latest equipment for surgery and X-rays but refuses to adopt a simple, effective system like Practo (congratulations on the mention in The Economist) which his other colleagues are using.

The dentist gave me many reasons:

- easy for the receptionist or him to take a pencil/eraser and modify

- uses both sides of a very small piece of paper for the day

- easy to note time/procedure – which also provides inputs for inventory

- green – less power consumption than devices/internet charges etc.

- controls his own data – very crucial for him

HA, resilient, green, and cheap too.


AzureML – Zero to Hero talk at SQL User Group – Bangalore

Update – Presentation (2nd Aug 2014 – SQL UG meetup, Bangalore)

This weekend I have a slot – thanks to @vinodk_sql, @pinaldave, @banerjeeamit and @blakhani – for speaking on AzureML. I will assume most of the folks are there to “find out” what is new, and will take them on a journey from that point of view. Hopefully they will leave excited about the tool and its ease of use, and curious enough to take the journey further.

We will cover what AzureML can do by way of examples, and get a basic idea of what is available out of the box (data ingestion, model creation, validation and web publishing – request/response). We will cover, at a high level, the algorithms for various tasks, the need for data cleansing/feature selection, and the available tools for the same. We will not go deep into R integration or the tuning of algorithms (sweeping/active-online learning). We will sidestep the gory details of what each algorithm means, but cover the evaluation metrics which are important for seeing the gains of using an algorithm.

It will be a demo heavy session taking data from public sites.

Venue:
Microsoft Corporation, Signature Building,
Embassy Golf Links Business Park,
Intermediate Ring Road,
Domlur, Bangalore – 560071

Location and details are here.

And the title of the talk was chosen by Vinod.

No, not Data Science, just Data Analysis

Over the last 6+ years we have worked with various folks who wanted to learn more from their data. It has been more of a learning experience for us.

1. Subsidized items beneficiaries – This is a very big initiative with the potential for pilfering and multiple entitlements. We focused on multiple entitlements using the available digital information.
– missing addresses
– straightforward same-household addresses
– wrong/unverified addresses with missing documents
– the same person’s name spelled differently, and related persons’ information spelled slightly differently
– having a “presence” across multiple locations far apart
– missing biometric information where it was required
– corrupted biometric data
– missing “supporting” documents

Most of the issues of dubious addresses and missing or questionable documents indicate problems at various levels (acceptance, ingestion, approval).

2. Subsidized healthcare data
This enables people to take care of critical health issues in a subsidized fashion. We found a lot of obvious data issues:
– plastic surgery repeated on different body parts for the same folks over the years
– people delivering kids within short periods
– certain districts filing a lot more claims overall for surgeries (u, burns, additional stents)
– stays in the ICU for neuro but medicines for something else
– stays for Whipples (an oncology surgery) of any kind, and increased mastectomies of any kind, without district data showing an increase. Maybe it is just a coincidence.
– Ureteric reimplantations and paediatric acute intestinal obstruction larger than for others.


3. Elector data
Challenges here range from missing supporting data to duplicate information. The duplicates, and the findings in general, were very interesting:
– people living in temples (sadhus are apparently exempt) and schools
– multiple families living across various parts of the state (labour on the move)
– people thinking multiple voter-ID cards help them take advantage of some government schemes like rations/subsidized food, or just serve as a backup in case one is lost
– a woman married to 4 people… (possible in certain tribal locations)
– people with various versions of a name (first, middle, family) at the same address, with a little variation in age thrown in too

4. Non-performing assets in lending firms
This sort of bubbled up when the core-banking effort took place and a lot of database “constraints” had to be loosened to enable uploads in some places.
– This shows up as a lot of accounts with very few substantiating documents, which turn into NPAs over time.
– It is especially bad for the co-operative agencies, where governance is very weak.

This was the one case where we used simple classification/clustering mechanisms to simplify our analysis.

5. Rental cab agency
This was unique in terms of “cost” control measures. One particular trip always used to consume more fuel than normal transport would. It was found that cab drivers congregate outside the expensive parking to avoid paying for it, and thus end up using more fuel to come in and pick up customers. Certain locations/times also always had bad feedback in terms of response time – the reason being drivers parked far away where parking was cheaper or free, or taking food/rest in a cheaper location.

At times I would have loved to throw the data at a black box which could throw back questions and beautiful answers. Honestly, more time was spent getting the data, cleaning it, and re-entering missing data (surgery descriptions differing from surgery types). Later on, simple grouping/sum/avg/median (stats) kinds of exploration threw up most of the information that we found.
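Most of the findings above came from exactly that kind of plain aggregation. A hedged sketch (table and column names are hypothetical) of the shape of query that kept recurring, here against the healthcare claims data:

    -- Claims per district and procedure; districts far off the state-wide averages stood out immediately.
    SELECT district,
           procedure_type,
           COUNT(*)          AS claim_count,
           AVG(claim_amount) AS avg_claim
    FROM   claims
    GROUP  BY district, procedure_type
    ORDER  BY claim_count DESC;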