Derived data

    The management of data that is derived, augmented, enhanced, adjusted, or cooked — as opposed to just the raw stuff.

    May 20, 2018

    Technology implications of political trends

    The tech industry has a broad range of political concerns. While I may complain that things have been a bit predictable in other respects, politics is having real and new(ish) technical consequences. In some cases, existing technology is clearly adequate to meet regulators’ and customers’ demands. Other needs look more like open research challenges.

    1. Privacy regulations will be very different in different countries or regions. For starters:

    All of these rules are subject to change based on:

    And so I believe: For any multinational organization that handles customer data, privacy/security requirements are likely to change constantly. Technology decisions need to reflect that reality.

    2. Data sovereignty/geo-compliance is a big deal. In fact, this is one area where the EU and authoritarian countries such as Russia formally agree. Each wants its citizens’ data to be stored locally, so as to ensure adherence to local privacy rules.

    For raw, granular data, that’s a straightforward — even if annoying — requirement to meet. But things get murkier for data that is aggregated or otherwise derived. Read more
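For the raw, granular case, the requirement amounts to partitioning records by the customer's home region before anything is persisted. A minimal sketch in Python — the region names, record layout, and in-memory "stores" are all illustrative, not any particular product's API:

```python
# Geo-compliant routing sketch: each raw customer record lands only
# in the store for its home region. Regions and records are invented
# for illustration.

REGIONS = {"EU", "RU", "US"}

def route_by_region(records):
    """Group raw records into per-region buckets for local storage."""
    stores = {region: [] for region in REGIONS}
    for rec in records:
        region = rec["home_region"]
        if region not in stores:
            raise ValueError(f"no local store for region {region!r}")
        stores[region].append(rec)
    return stores

records = [
    {"id": 1, "home_region": "EU", "email": "a@example.com"},
    {"id": 2, "home_region": "RU", "email": "b@example.com"},
    {"id": 3, "home_region": "EU", "email": "c@example.com"},
]
stores = route_by_region(records)
print([r["id"] for r in stores["EU"]])  # → [1, 3]
```

The murkiness the post alludes to starts exactly where this sketch stops: once records are aggregated across regions, there is no longer a single `home_region` to route on.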

    August 3, 2015

    Data messes

    A lot of what I hear and talk about boils down to “data is a mess”. Below is a very partial list of examples.

To a first approximation, one would expect operational data to be rather clean. After all, it drives and/or records business transactions. So if something goes awry, the result can be lost money, disappointed customers, or worse, and those are outcomes to be strenuously avoided. Up to a point, that’s indeed true, at least at businesses large enough to be properly automated. (Unlike, for example, mine.)

    Even so, operational data has some canonical problems. First, it could be inaccurate; somebody can just misspell or otherwise botch an entry. Further, there are multiple ways data can be unreachable, typically because it’s:

Inconsistency can take multiple forms, including: … Read more

    September 29, 2013

    ClearStory, Spark, and Storm

    ClearStory Data is:

    I think I can do an interesting post about ClearStory while tap-dancing around the still-secret stuff, so let’s dive in.

    ClearStory:

    To a first approximation, ClearStory ingests data in a system built on Storm (code name: Stormy), dumps it into HDFS, and then operates on it in a system built on Spark (code name: Sparky). Along the way there’s a lot of interaction with another big part of the system, a metadata catalog with no code name I know of. Or as I keep it straight:

    Read more
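The ingest-stage / staging-area / processing-stage shape described above can be pictured with a toy sketch. This is not ClearStory code — the function names echo the "Stormy"/"Sparky" code names, and an in-memory dict stands in for HDFS:

```python
# Toy sketch of the described flow: an ingest stage ("Stormy") dumps
# rows into a staging area (standing in for HDFS), and a processing
# stage ("Sparky") operates on what was staged. All names are
# illustrative.

staged = {}  # stands in for HDFS

def stormy_ingest(source_name, raw_rows):
    """Ingest stage: accept raw rows and dump them into staging."""
    staged[source_name] = list(raw_rows)

def sparky_process(source_name):
    """Processing stage: operate on staged data (here: a trivial sum)."""
    rows = staged[source_name]
    return sum(row["value"] for row in rows)

stormy_ingest("clicks", [{"value": 2}, {"value": 3}])
print(sparky_process("clicks"))  # → 5
```

The point of the separation is that the ingest and processing stages only ever meet through the staging layer, which is also where a metadata catalog can sit.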

    September 8, 2013

    Layering of database technology & DBMS with multiple DMLs

    Two subjects in one post, because they were too hard to separate from each other

    Any sufficiently complex software is developed in modules and subsystems. DBMS are no exception; the core trinity of parser, optimizer/planner, and execution engine merely starts the discussion. But increasingly, database technology is layered in a more fundamental way as well, to the extent that different parts of what would seem to be an integrated DBMS can sometimes be developed by separate vendors.

    Major examples of this trend — where by “major” I mean “spanning a lot of different vendors or projects” — include:

    Other examples on my mind include:

    And there are several others I hope to blog about soon, e.g. current-day PostgreSQL.

In an overlapping trend, DBMS increasingly have multiple data manipulation APIs. Examples include: … Read more
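The "core trinity" of parser, optimizer/planner, and execution engine can be sketched as three separable modules. The mini query language here (just `COUNT <table>`) is invented for illustration; the point is only that each stage consumes the previous stage's output and could, in principle, be swapped out independently:

```python
# Modular query pipeline sketch: parser -> planner -> execution engine.
# The single-operation query language is hypothetical.

TABLES = {"users": [{"id": 1}, {"id": 2}, {"id": 3}]}

def parse(query):
    """Parser: turn query text into an abstract syntax tree."""
    op, table = query.split()
    return {"op": op.upper(), "table": table}

def plan(ast):
    """Planner/optimizer: choose a physical operator for the AST."""
    if ast["op"] == "COUNT":
        return ("scan_count", ast["table"])
    raise NotImplementedError(ast["op"])

def execute(physical_plan):
    """Execution engine: run the physical plan against stored data."""
    op, table = physical_plan
    if op == "scan_count":
        return len(TABLES[table])

print(execute(plan(parse("count users"))))  # → 3
```

The layering trend the post describes pushes these seams further down, to the point that storage engines and query layers really are shipped by different vendors.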

    August 4, 2013

    Data model churn

Perhaps we should remind ourselves of the many ways data models can be caused to churn. Here are some examples that are top-of-mind for me. They do overlap a lot — and the whole discussion overlaps with my post about schema complexity last January, and more generally with what I’ve written about dynamic schemas for the past several years.

    Just to confuse things further — some of these examples show the importance of RDBMS, while others highlight the relational model’s limitations.

    The old standbys

Product and service changes. Simple changes to your product line may not require any changes to the databases recording their production and sale. More complex product changes, however, probably will.

    A big help in MCI’s rise in the 1980s was its new Friends and Family service offering. AT&T couldn’t respond quickly, because it couldn’t get the programming done, where by “programming” I mainly mean database integration and design. If all that was before your time, this link seems like a fairly contemporaneous case study.

    Organizational changes. A common source of hassle, especially around databases that support business intelligence or planning/budgeting, is organizational change. Kalido’s whole business was based on accommodating that, last I checked, as were a lot of BI consultants’. Read more

    February 13, 2013

    It’s hard to make data easy to analyze

    It’s hard to make data easy to analyze. While everybody seems to realize this — a few marketeers perhaps aside — some remarks might be useful even so.

Many different technologies purport to make data easy, or easier, to analyze; so many, in fact, that cataloguing them all is forbiddingly hard. Major claims, and some technologies that make them, include:

    *Complex event/stream processing terminology is always problematic.

My thoughts on all this start: … Read more

    April 24, 2012

    Three quick notes about derived data

    I had one of “those” trips last week:

    So please pardon me if things are a bit disjointed …

    I’ve argued for a while that:

    Here are a few notes on the derived data trend. Read more

    February 6, 2012

    WibiData, derived data, and analytic schema flexibility

    My clients at Odiago, vendors of WibiData, have changed their company name simply to WibiData. Even better, they blogged with more detail as to how WibiData works, in what is essentially a follow-on to my original WibiData post last October. Among other virtues, WibiData turns out to be a poster child for my views on derived data and the corresponding schema evolution.

    Interesting quotes include:

    WibiData is designed to store … transactional data side-by-side with profile and other derived data attributes.

    … the ability to add new ad-hoc columns to a table enables more flexible analysis: output data that is the result of one analytic pipeline is stored adjacent to its input data, meaning that you can easily use this as input to second- or third-order derived data as well.

    schemas can vary over time; you can easily add a field to a record, or delete a field. … But even though you start collecting that new data, your existing analysis pipelines can treat records like they always did; programs that don’t yet know about the new cookie are still compatible with both the old records already collected, and the new records with the additional field. New programs fill in default values for old data recorded before a field was added, applying the new schema at read time.

    schemas for every column are stored in a data dictionary that matches column names with their schemas, as well as human-readable descriptions of the data.
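The read-time schema resolution quoted above — old programs keep working, and new programs fill in defaults for data recorded before a field existed — can be sketched in a few lines of Python. The field names and the schema format are illustrative, not WibiData's actual API:

```python
# Read-time schema resolution sketch: the current schema maps each
# field to a default, and the reader fills in defaults for fields an
# old record never had. Field names ("cookie" etc.) are hypothetical.

new_schema = {"user": None, "clicks": 0, "cookie": "unset"}  # field -> default

def read_with_schema(record, schema):
    """Apply the current schema at read time, defaulting missing fields."""
    return {field: record.get(field, default) for field, default in schema.items()}

old_record = {"user": "alice", "clicks": 7}                    # written before "cookie" existed
new_record = {"user": "bob", "clicks": 2, "cookie": "abc123"}  # written under the new schema

print(read_with_schema(old_record, new_schema))
# → {'user': 'alice', 'clicks': 7, 'cookie': 'unset'}
```

Because resolution happens at read time, nothing already on disk has to be rewritten when a field is added or deleted — which is exactly what makes the derived-data pipelines described here cheap to evolve.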

    Interesting aspects of the post that don’t lend themselves as well to being excerpted include:

    September 6, 2011

    Derived data, progressive enhancement, and schema evolution

    The emphasis I’m putting on derived data is leading to a variety of questions, especially about how to tease apart several related concepts:

So let’s dive in. Read more

    July 18, 2011

    HBase is not broken

    It turns out that my impression that HBase is broken was unfounded, in at least two ways. The smaller is that something wrong with the HBase/Hadoop interface or Hadoop’s HBase support cannot necessarily be said to be wrong with HBase (especially since HBase is no longer a Hadoop subproject). The bigger reason is that, according to consensus, HBase has worked pretty well since the .90 release in January of this year.

    After Michael Stack of StumbleUpon beat me up for a while,* Omer Trajman of Cloudera was kind enough to walk me through HBase usage. He is informed largely by 18 Cloudera customers, plus a handful of other well-known HBase users such as Facebook, StumbleUpon, and Yahoo. Of the 18 Cloudera customers using HBase that Omer was thinking of, 15 are in HBase production, one is in HBase “early production”, one is still doing R&D in the area of HBase, and one is a classified government customer not providing such details. Read more

