Category Archives: stuff

Guest Blog – @Mattermark for Everyone by @JDcarlu

Original post: https://medium.com/@JDcarlu/mattermark-for-everyone-b9d92e6a8831

Reply to author on Twitter at @JDcarlu.

If you don’t know what Mattermark does: they “Research, prospect, and track the fastest growing private companies with deal intelligence.”

They focus on three main markets:

Private Market Investors: Venture capital, private equity & hedge funds
Lead Generation: Sales & business development professionals
Mergers & acquisitions: Corporate development & investment bankers

With that said, let’s dive right in:
On Hunting

Mattermark’s payment plans are $399/mo paid annually or $499 billed monthly.

Using Christoph Janz’s categorization of SaaS companies, we would say that Mattermark is hunting Deer and Rabbits. Actually, it is closer to hunting only Deer.

What I found very interesting about this blog post, which relates the activity of “hunting animals” to sales, is how far from the reality of hunting it is. I agree that you should choose in advance what your prey (customer) is going to be and what strategy you will use, but the reality is that animals are not waiting for you to come (the same goes for customers). When you hunt, remember that you are not the only one going after the same customers, and that they don’t behave like a shopping list where you find them and choose which to pick.
When you are in survival mode you hunt whatever you find, as fast as you can. If you have a comfortable market share (you are not fighting for survival), you will focus on hunting more Deer instead of Rabbits. The trap here is that we tend to tie our sales (hunting) strategy to who we target as our customer.

But there is a difference between the attitude you should take in approaching the hunt and the type of customers you are going for.

If you fall into the trap, you will adapt and correlate your strategy with the type of customers you want. The trick is to unlearn this, and to realize that the strategy will not necessarily focus on hunting one type of animal, and that one type of animal (customer) will not necessarily be hunted with the same strategy.

In a SaaS model, why shouldn’t you customize your product so it can be sold for different uses? In our example, Mattermark, the possible uses of the platform (product) are larger than today’s strategy and payment model suggest.
Minimum Use of Product

Everyone knows (I hope) by now what MVP means (Minimum Viable Product), a concept introduced by Frank Robinson but made famous by Eric Ries. As Ries writes in his book:

The Minimum Viable Product is that version of the product that enables a full turn of the Build-Measure-Learn loop with a minimum amount of effort and the least amount of development time. The minimum viable product lacks many features that may prove essential later on.

In our case, Mattermark has not only built an MVP but is at this moment much more than that. It is an incredibly sophisticated product. I have personally used it and can say it’s amazing. Now, for me, it was more than I needed. Why? Because there is a certain amount of information and use you can get out of their platform, which is very limited compared to what it has to offer. Here is where the concept of Minimum Use of Product (MUP) comes into play:

The Minimum Use of Product is the simplest use a customer can give to your product while still finding enough value in it that they are willing to pay.

Some examples on the Mattermark platform would be: one search for the right angel investor, one sales lead to possible customers, one competitor that raised funding, one VC that has not yet invested in your niche, one startup that you can partner with.

So let’s try to figure out how the MUP will work when it comes to $$.
Monthly Plan — Annual Plan Vs Pay what you use

Now that we understand what MUP means and stands for, let’s see how this applies to the monetization strategy.

Let’s start by saying that the most surprising thing for me is that Mattermark has experimented with and tried other models, but has not applied them directly to its main product. See “Finding a model for sustainable paid content at Mattermark,” published by Danielle Morrill.

“2.5 hours and $1,000+ of pre-orders later, we might be onto something here… What if subscription fees sustained content operations and we could completely focus on turning out great content, even if it was just one piece each day?”

She is very smart. She saw the signs when publishing the reports. What she took as a sign of a “new startup journalism” is a new strategy model combined with a need for content. So how is this connected to their main product, the platform?

When you first use Mattermark you go into the free trial. That’s usually a great idea. Give your customers a taste of your product; they try it, learn from it, get used to it, get addicted, can’t live without it, can’t imagine life without it, OK I will pay.

The entire user experience — from the first time the user visits your site to the moment he signs up for a free trial, through the onboarding and the exploration of the product and further on — needs to be completely frictionless -Christoph Janz

We need to understand the friction at the different stages (of the free trial). Let’s say there are two kinds of friction: one is psychological (what happens in your mind) and one is monetary (it hurts to pay and you feel it in your pocket).
The psychological friction has disappeared with the use of the free trial, but not the monetary one. That one appears when we need to enter our credit card and then vanishes during the month we are using the platform.

The friction comes back when you finish the month of trial, you know they have your credit card, and you remember their payment options:

$399/mo paid annual or $499 monthly.

Then you go “Wow.” I loved the product and really got value out of it, but not enough to become a Deer. Let’s say I would pay between $10 and $100 for the use I got; I would be closer to a Mouse. But there is no option to pay by the hour, by the day, or by the lead. This causes the friction to skyrocket! But here is an idea:
What if you could pay $50 for one day and that money went into creating content?

Just pay $50 for a day of use of the Mattermark platform. Today’s revenue per day would be something like $400 / 30 ≈ $13, or, counting 25 weekdays, about $16. Or, counting real hours of use, say 2 hours a day, 6 days a week, over 4 weeks (48 hours), that would give us $200 a day. But that is what VCs, angels, and hedge funds can and will pay. If we are focusing on the founder of a startup, we need to adapt to their reality (25% of what a VC firm can pay is, I think, more than a good deal). So why isn’t there a business model for pay-what-you-use or pay-by-the-day?
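
To make the arithmetic explicit, here is a quick sketch (mine, not from the original post) of what the current $399/mo plan works out to per day and per hour of actual use; the $50 day pass and the $200/day figure for heavy users are the author’s proposal, not something derived from these numbers:

// rough per-day / per-hour equivalents of the $399/mo (annual) plan
var monthlyPrice = 399;

var perCalendarDay = monthlyPrice / 30;   // ~ $13.30 per calendar day
var perWeekday     = monthlyPrice / 25;   // ~ $15.96 per weekday
var perHourOfUse   = monthlyPrice / 48;   // ~ $8.31 per hour, at 2 hours a day, 6 days a week, 4 weeks

console.log(perCalendarDay, perWeekday, perHourOfUse);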

To understand why SaaS companies (usually) don’t go down this path, we need to talk about recurring revenue.

Recurring revenue

They are looking for the frequency of that revenue stream, and whether or not it is recurring and easily predictable. — George Deeb on investors.

Startups (founders) have fallen in love with recurring revenue. Why? Well, in this case I’m going to blame investors. They have insisted that this is the right model to follow and that they will fund any business that follows this religion. Don’t get me wrong! I also believe in MRR (monthly recurring revenue) and ARR (annual recurring revenue) as measurements.

But falling in love with one metric can make you see only one strategy and can blind you from seeing other paths.

Like a carriage horse that can only look forward and never to the sides.
I like Brad Feld’s take on ARR, related to valuation, which I will use to make a point about how the metric can confuse us.

A simple answer is “well — public SaaS companies are currently trading at 6x average multiples so we should get a 6x ARR valuation.” There are so many things wrong with this statement (including what’s the median valuation, how do it index against growth rates or market segment?, what is your liquidity discount for being able to trade in and out of the stock), but the really interesting dynamic is the relative value trap. What happens when public SaaS companies go up to an 8x average valuation? Or what happens when they go down to a 3x valuation? And, is multiple of revenue really the correct long term metric?

Recurring revenue is not new in business. It may be new to tech (tech itself is pretty new in Earth years), but you know who it is absolutely new for? Customers (for John Doe). Tech companies need to understand that you have to educate your clients on recurring revenue, and on the mutual benefits of this model, instead of asking it of them or forcing it on them because it’s the only model you have.

So how do we do this? First we need to know there is a need for our product.
We all want your product

There is a real need for good data in the startup community.

Mattermark focuses on VCs, angels, and hedge funds, which is good when you want to go for a niche market. Own your market, monopolize it. That is Mattermark’s position today. Very strong. But what about the rest of us?
I’m going to make a guess here: I believe Mattermark was built to help startups as much as to help investors. The vision is to help founders find the right investors, obtain future leads, get sophisticated info on their competitors, and build their own startups.

Conclusion

So if there is a need, and there is a possible business model where I can pay by the day (at a higher daily rate than the monthly subscription works out to) that can complement the current recurring-revenue model, why wouldn’t they go for it? Maybe they will.

PS: Let’s be clear that I have no relationship with the company, apart from having tried their free trial, loving their product, following their founders on Twitter, and believing they have a great future. The aim of this blog post is only to help.

Interesting app TBA soon: Shortapp.co ht @KikiSchirr

Please check out Shortapp.co and ask for an opportunity to test the app. A summary from the team is below.

And sign up for the 1k-beta-testers email list for future announcements like this at:
https://lists.sonic.net/mailman/listinfo/1k-beta-testers

Thank you in advance.

Charles Jo 650.906.2600 Charles@StartupStudyGroup.com www.StartupStudyGroup.com Twitter @charlesjo

Begin forwarded message:

On Saturday, Jan 17, 2015 at 1:55 AM, Short Hello <hello>, wrote:

Hi Charles!
Here is some more information about Short. Please also see the attached “Review Guide” PDF.

List of Key Features:

– Filter articles by 5 or 10 minutes of reading time

– Connect your favorite apps like Pocket, Instapaper, Readability and more

– Use it in any situation, even in Night Mode

– Always have your Reading Progress and Reading Time in sight

– Share your favorite reads with the iOS 8 share sheet

– Access your articles even when you’re offline

– Swipe left in the feed or Pull to archive & delete an Article

Best regards,
Alex


Alex Muench, Designer
@alexmuench

Short Review Guide.pdf

guest blog – No, Everyone in Management Is Not a Programmer

Guest blog by Adam Marx. Original at https://adammarxsmind.wordpress.com/2015/01/16/no-everyone-in-management-is-not-a-programmer/

No, Everyone in Management Is Not a Programmer
Posted on January 16, 2015

Just over a couple of weeks ago, on New Year’s Day, TechCrunch ran an article entitled “Everyone in Management Is a Programmer.”

Though I’m sure that the author, Adam Evans (co-founder and CTO of RelateIQ), had only the best intentions in trying to show programmers that any of them could cultivate the skills necessary to be effective managers, I think the way he’s attempting to go about illustrating his point is limiting when examined within the greater context of tech and business.

In targeting programmers and/or coders in the title of his article, Evans, whether he means to or not, excludes from his discussion those of us who might not have the technical abilities of programmers. While I agree with Evans’ attempt to encourage tech-savvy people to step out of their comfort zones and become successful managerial material, I disagree with his implied suggestion that one must have technical prowess to become a successful manager, and by extension, a founder, CEO, or any other executive within the tech field. The concept leaves out a whole slew of professionals within the tech space who do not consider themselves coders, but who still bring to the table skills that are just as important as programming knowledge.

I certainly understand Evans’ thought process and commend it: those who identify as programmers can certainly cultivate the skills to become effective managers and break out of their comfortable and familiar role as “the tech person.” But I think the ability to better oneself comes from drive and dedication derived from one’s inner character, not from the specific function which one performs at any particular time, whether it be coding or something else. While laudably encouraging programmers and coders to step outside of their comfort zone and become managers, Evans goes to the opposite extreme by suggesting that only programmers and coders can aspire to managerial positions.

It is teamwork that builds great companies. Great managers are those members of the team who lead others, who motivate the other team members and drive the enterprise forward. Yes, programmers and coders are important players on the team, but they are not the only players. Those involved in marketing, finance, public relations, design and layout, legal, and public speaking are also members of the team, and with the requisite leadership skills may realistically aspire to become great managers as well.

Perhaps one of the best recent examples of how the “coding persona” need not be the only one in a company’s top tiers is Ruben Harris’s article “Breaking Into Startups” which was posted a few days ago. The article received a lot of attention (and rightly so, in my opinion) as it describes Harris’s transition from a finance/banking background in Atlanta to a position at a tech startup in Silicon Valley. At this point, I’ve read Harris’s piece a few times already—it’s well-written and insightful, encouraging without becoming preachy. (Truly the mark of a great writer is when the reader of the piece feels as if the piece were written specifically for them). I think my personal most significant takeaway from the article is how Harris demonstrates that it was his desire and networking prowess (and the financial/marketing knowledge he knew he could bring to the table) that led to his successful introductions and subsequent job opportunities.

Evans’ thesis is flawed for a second reason: the belief that people can be programmed the same way as computer code is flatly false. Concerning this thought process, firstly, no, they can’t—people are not computers precisely because they can be unpredictable and do not work within the same dynamics as a programmable machine and/or line of code. It is this unpredictability and capacity for non-linear thinking that creates the very pool from which innovation and unique thoughts spring. To assume that this can be contained, measured, predicted, programmed—well, it’s about as predictable as Ian Malcolm’s chaos-theory-dinosaur point in Jurassic Park. [1]
Secondly, to attempt to “program” a person (whether that person is your customer, VC investor, employee, team member, etc.) does not reflect well on one as either a manager or a person. Rather than a productive quality, it more than likely comes across to other people as a need to resort to forms of manipulation in order to move one’s business ahead—not a realization I would want to have if I was an investor, employee, potential partner, etc.
Evans’ article takes a good step by encouraging programmers and coders to move into managerial positions. His appeal to coders I think carries with it a deep respect for those whose work he understands first-hand, and whom he seeks to benefit by sharing his own experience and knowledge. However, not everyone in management is a programmer, and people cannot be “programmed.” Successful managers—whether or not they are programmers—are those who find ways to motivate their peers (employees, teams, investors, customers, etc.) that come across as win-win situations, not as attempts at “programming” and predicting their actions in the future.
My respect to Evans for attempting to help his fellow programmers move out from their comfortable places behind the keyboard to take more active, managerial roles in their companies. I think his intentions will serve his team and company well. But I caution against alienating those who are not coders. Rule number one of any business: never seek to speak to one portion of your customers at the expense of alienating another. Those of us who are not coders are still here, and we are still integral in the equation. We build the same kinds of companies and assume the same levels of leadership; we just do it differently.

Thanks to Dad for reading early drafts of this essay.

Notes:

[1] Dr. Ian Malcolm, the mathematician character in Michael Crichton’s novel Jurassic Park (1990), was a characteristic cynic, though no more so than when he scoffed at the idea that the park’s creator, John Hammond, thought he would be able to “control” nature. Malcolm demonstrated his cynicism mathematically through explanations of fractal design and chaos theory as they pertained to nature and the growth of life.

languagengine – Blog – Type Checking in JavaScript

Type Checking in JavaScript

posted by Darryl on 16 Jan 2015

I’d like to follow up on my previous blog post on implementing a simply typed lambda calculus in JavaScript by demonstrating a more complete, more natural type system. If you haven’t read it already, you should do so. The previous type system was a little unnatural because it had no type inference mechanism, and so all of the relevant type information had to be provided in the programs themselves. This time, however, we’ll build a type checker that has inference, so we can write more natural programs.

One thing to note regarding type inference like this is that well-designed typed programming languages, with properly defined type systems, can infer a lot in the way of types. One often hears that having to write type signatures is just a pain in the butt, and that therefore strictly typed programming languages aren’t as pleasant to use as untyped or duck-typed ones. But with type inference, you frequently don’t need to write type signatures at all. In a language like Haskell, for instance, type signatures can very frequently be omitted.

The GitHub repo for this type checker can be found here.

To begin, let’s define the type theory we’d like to embody. We’ll again have just pair types and function types, as well as three primitive types Foo, Bar, and Baz, which will serve as placeholders until we implement some other types in a later post. Again we’ll start with the type formation judgment A type that tells us when something is a type or not:

A type
======

-------- Foo Formation      -------- Bar Formation
Foo type                    Bar type


            -------- Baz Formation
            Baz type


A type    B type                  A type    B type
---------------- * Formation      ---------------- -> Formation
    A*B type                        A -> B type

The JavaScript implementation will be the same as before:

var Foo = { tag: "Foo" };
var Bar = { tag: "Bar" };
var Baz = { tag: "Baz" };

function prod(a,b) {
    return { tag: "*", left: a, right: b };
}

function arr(a,b) {
    return { tag: "->", arg: a, ret: b };
}

function judgment_type(a) {
  if ("Foo" == a.tag || "Bar" == a.tag || "Baz" == a.tag) {
      
      return true;
      
  } else if ("*" == a.tag) {
      
      return judgment_type(a.left) && judgment_type(a.right);
      
  } else if ("->" == a.tag) {
      
      return judgment_type(a.arg) && judgment_type(a.ret);
      
  } else {
      
      return false;
      
  }
}
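
For example (a usage sketch of my own, not from the original post), we can confirm that a composite type built with these constructors is well-formed, while an unknown tag is rejected:

// (Foo * (Bar -> Baz)) is a type, built only from the formation rules above
judgment_type(prod(Foo, arr(Bar, Baz)));   // true

// an object with an unrecognized tag is not a type
judgment_type({ tag: "Quux" });            // false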

As before, we’ll use a snoc linked-list for encoding variable contexts:

var empty = { tag: "<>" };

function snoc(g,x,a) {
    return { tag: ",:", rest: g, name: x, type: a };
}
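
As a quick illustration (mine, not from the post), the context x : Foo, y : Bar*Baz would be encoded by nesting snoc applications:

// the context  <> , x : Foo , y : Bar*Baz
var g = snoc(snoc(empty, "x", Foo), "y", prod(Bar, Baz));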

We’ll have new intro and elim rules for the judgment G !- M : A which defines well-typed-ness. These will define programs that have less annotation for types, and which are therefore more natural and easier to use.

G !- M : A
==========

G !- M : A    G !- N : B
------------------------ * Intro
   G !- (M,N) : A*B

G !- P : A*B    G, x : A, y : B !- M : C
---------------------------------------- * Elim
     G !- split P as (x,y) in M : C

This time, our split elim, which does pattern matching for pairs, is not annotated with the types of x and y. If you look at the stuff above the inference line, you see A and B, but these don’t appear below the line. If we wanted to simply type check, we’d need to invent these out of thin air, and there are a lot of ways to do that. So instead we can do inference on P, to get back its type, which had better be of the form A*B, and proceed from there. So, we’ll have the old checking function from before, but also a new inferring function:

function pair(m,n) {
  return { tag: "(,)", first: m, second: n };
}

function split(p, x, y, m) {
    return { tag: "split",
             pair: p,
             name_x: x, name_y: y,
             body: m };
}

// judgment_check will be modified shortly
function judgment_check(g, m, a) {
    
    if ("(,)" == m.tag && "*" == a.tag) {
        
        return judgment_check(g, m.first, a.left) &&
               judgment_check(g, m.second, a.right);
        
    } else if ("split" == m.tag) {
        
        var inferred_pair = judgment_infer(g, m.pair);
        
        if (!inferred_pair || "*" != inferred_pair.tag) {
            return false;
        }
        
        return judgment_check(snoc(snoc(g, m.name_x, inferred_pair.left),
                                   m.name_y, inferred_pair.right),
                              m.body,
                              a);
    
    } else {
        
        return false;
        
    }
    
}

This much is basically the same as before, but we’ll also define now a function judgment_infer:

// judgment_infer will also be modified shortly
function judgment_infer(g, m) {
    
    if ("(,)" == m.tag) {
        
        var inferred_left = judgment_infer(g, m.first);
        var inferred_right = judgment_infer(g, m.second);
        
        if (!inferred_left || !inferred_right) {
            return null;
        }
        
        return prod(inferred_left, inferred_right);
        
    } else if ("split" == m.tag) {
        
        var inferred_pair = judgment_infer(g, m.pair);
        
        if (!inferred_pair || "*" != inferred_pair.tag) {
            return null;
        }
        
        return judgment_infer(snoc(snoc(g, m.name_x, inferred_pair.left),
                                   m.name_y, inferred_pair.right),
                              m.body);
        
    } else {
        
        return null;
        
    }
    
}

This new function has some obvious similarities to judgment_check, but instead of checking if a term has the appropriate type, it computes the correct type and returns that.

Our definitions for function types are very similar to what we had before as well:

 G, x : A !- M : B
-------------------- -> Intro
G !- \x:A.M : A -> B

G !- M : A -> B    G !- N : A
----------------------------- -> Elim
        G !- M N : B

Unlike before, however, we now tag lambdas with their argument type. This particular type inference algorithm isn’t smart enough to solve that for us (to do that would require a unification-based algorithm).

And the implementation of judgment_check is only slightly modified, including the syntax for function application. Again, tho, we use judgment_infer to recover the missing content of the elim rule.

function lam(x,a,m) {
    return { tag: "lam", name: x, arg_type: a, body: m };
}

function app(m,n) {
    return { tag: "app", fun: m, arg: n };
}

// the new version
function judgment_check(g, m, a) {
    
    ... else if ("lam" == m.tag && "->" == a.tag) {
        
        return type_equality(m.arg_type, a.arg) &&
               judgment_check(snoc(g, m.name, a.arg), m.body, a.ret);
        
    } else if ("app" == m.tag) {
        
        var inferred_arg = judgment_infer(g, m.arg);
        
        if (!inferred_arg) {
            return false;
        }
        
        return judgment_check(g, m.fun, arr(inferred_arg, a));
        
    } ...
    
}

And now the modification to judgment_infer:

// the new version
function judgment_infer(g, m) {
    
    ... else if ("lam" == m.tag) {
        
        var inferred_body = judgment_infer(snoc(g, m.name, m.arg_type),
                                           m.body);
        
        if (!inferred_body) { return null; }
        
        return arr(m.arg_type, inferred_body);
        
    } else if ("app" == m.tag) {
        
        var inferred_fun = judgment_infer(g, m.fun);
        
        if (!inferred_fun || "->" != inferred_fun.tag ||
            !judgment_check(g, m.arg, inferred_fun.arg)) {
            
            return null;
        }
        
        return inferred_fun.ret;
        
    } ...
    
}

And of course we have the variable rule as before:

function v(n) {
    return { tag: "variable", name: n };
}

function judgment_check(g, m, a) {
    
    ... else if ("variable" == m.tag) {
        
        return type_equality(type_lookup(g, m.name), a);
        
    } ...
    
}

To infer the type of a variable, we’ll need to look it up in the context:

function type_lookup(g, n) {
    
    if ("<>" == g.tag) {
        
        return null;
        
    } else if (n == g.name) {
        
        return g.type;
        
    } else {
        
        return type_lookup(g.rest, n);
        
    }
    
}
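
As a small usage sketch (mine, not from the post), here is what lookup does in a context built with snoc:

var g = snoc(snoc(empty, "x", Foo), "y", prod(Bar, Baz));

type_lookup(g, "x");   // Foo
type_lookup(g, "y");   // the pair type Bar*Baz
type_lookup(g, "z");   // null, since "z" is not bound in the context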

But with that, inference is easy:


function judgment_infer(g, m) {
    
    ... else if ("variable" == m.tag) {
        
        return type_lookup(g, m.name);
        
    } ...
    
}

Lastly, type equality, which is unchanged:

function type_equality(a,b) {
    
    if (("Foo" == a.tag && "Foo" == b.tag) ||
        ("Bar" == a.tag && "Bar" == b.tag) ||
        ("Baz" == a.tag && "Baz" == b.tag)) {
        
        return true;
        
    } else if ("*" == a.tag && "*" == b.tag) {
        
        return type_equality(a.left, b.left) &&
               type_equality(a.right, b.right);
        
    } else if ("->" == a.tag && "->" == b.tag) {
        
        return type_equality(a.arg, b.arg) &&
               type_equality(a.ret, b.ret);
        
    } else {
        
        return false;
        
    }
    
}

For convenience in inspecting what’s going on with these functions, it’ll help to have a function to convert types to strings, so let’s define one:

function show(a) {
    if ("Foo" == a.tag) { return "Foo"; }
    if ("Bar" == a.tag) { return "Bar"; }
    if ("Baz" == a.tag) { return "Baz"; }
    if ("*" == a.tag) {
        return "(" + show(a.left) + "*" + show(a.right) + ")";
    }
    if ("->" == a.tag) {
        return "(" + show(a.arg) + " -> " + show(a.ret) + ")";
    }
    return "Unknown";
}
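
For instance (my examples, not from the post), show renders nested types with the expected parenthesization:

show(prod(Foo, arr(Bar, Baz)));   // "(Foo*(Bar -> Baz))"
show(arr(arr(Foo, Bar), Baz));    // "((Foo -> Bar) -> Baz)"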

If we now infer the types for some common functions, we’ll get exactly what we expect:

// \x : Foo. x
// infers to  Foo -> Foo
judgment_infer(empty, lam("x",Foo, v("x")));

// \x : Foo. \f : Foo -> Bar. f x
// infers to  Foo -> (Foo -> Bar) -> Bar
judgment_infer(empty, lam("x",Foo,
                        lam("f",arr(Foo,Bar),
                          app(v("f"), v("x")))));

// \p : Foo*Bar. split p as (x,y) in x
// infers to  Foo*Bar -> Foo
judgment_infer(empty, lam("p",prod(Foo,Bar),
                        split(v("p"),"x","y", v("x"))));
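
Assuming the branches shown above are assembled into single judgment_check and judgment_infer functions (as they are in the repo), we can also pretty-print inferred types with show and use the checking function directly. This is my usage sketch, not part of the original post:

// pretty-print the inferred type of the pair-projection example
show(judgment_infer(empty, lam("p", prod(Foo, Bar),
                              split(v("p"), "x", "y", v("x")))));
// "((Foo*Bar) -> Foo)"

// checking mode: the identity function on Foo checks against Foo -> Foo ...
judgment_check(empty, lam("x", Foo, v("x")), arr(Foo, Foo));   // true

// ... but not against Foo -> Bar
judgment_check(empty, lam("x", Foo, v("x")), arr(Foo, Bar));   // false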

If you have comments or questions, get in touch. I’m @psygnisfive on Twitter, augur on freenode (in #languagengine and #haskell). Here’s the HN thread if you prefer that mode, and also the Reddit thread.

event – FoundersSpace – Big Data: Everything You Need to Know!

Please review and share.

Charles Jo 650.906.2600 Charles@StartupStudyGroup.com www.StartupStudyGroup.com Twitter @charlesjo

Begin forwarded message:

It would be fantastic if you could let the people know about our upcoming events:

Big Data: Everything You Need to Know!
February 18th at 6 pm – 9 pm in Palo Alto
https://bigdata2015.eventbrite.com
25% Discount = 25OFF

Startup Party: 300+ Entrepreneurs, Angels, VCs & More!
February 19th at 8 pm – 12 am in Silicon Valley
http://mixathon.eventbrite.com
25% Discount = 25OFF

@FoundersSpace
http://www.FoundersSpace.com

networking – for those interested – Welcome to the “1k-beta-testers” mailing list​

Please share with your friends!

We are trying to recruit 1000 beta testers for my network of startup founders… Please share with your friends

https://lists.sonic.net/mailman/listinfo/1k-beta-testers

Charles Jo 650.906.2600 Charles@StartupStudyGroup.com www.StartupStudyGroup.com Twitter @charlesjo

Begin forwarded message:

On Thursday, Jan 15, 2015 at 12:08 PM, 1k-beta-testers-request@lists.sonic.net <1k-beta-testers-request>, wrote:

Welcome to the 1k-beta-testers@lists.sonic.net mailing list!

To post to this list, send your message to:

1k-beta-testers@lists.sonic.net

General information about the mailing list is at:

https://lists.sonic.net/mailman/listinfo/1k-beta-testers

languagengine – Blog – NLP for AI Needs Structured Meaning

NLP for AI Needs Structured Meaning

posted by Darryl on 14 Jan 2015

In my previous blog post, I mentioned that new school NLP techniques for handling meaning can’t really cope with the complexities of actual meaning in language, and typically are pretty primitive as a result. In this blog post, I’m going to discuss a number of aspects of natural language semantics which pose a problem for primitive, unstructured approaches, all of which are relevant for bigger AI projects. Hopefully you’ll get a sense of why I think more structure is better, or at least you’ll have an idea of what to try to solve with new school techniques.

There are five major phenomena related to natural language meanings which I’m going to focus on:

– Linear Order
– Grouping
– Non-uniform Composition
– Inference Patterns
– Invisible Meanings

Don’t worry if you don’t know what these are, they’re just presented here so you know the route we’re going to take in this post.

Linear Order

Linear order matters. Not just to grammaticality, making “John saw Susan” grammatical in English but not “saw John Susan”, but also to meaning. Treating a sentence as a bag of words doesn’t suffice to capture meaning, because then the sentences “John saw Susan” and “Susan saw John” are the same. They have the exact same bag of words representation, so the exact same meaning. But of course they have very different meanings.
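
To make that concrete, here is a minimal sketch (mine, not from the original post) of a bag-of-words representation in JavaScript; both sentences collapse to the same bag, so whatever meaning distinguishes them is lost:

// represent a sentence as a bag of words: word counts serialized in sorted key order
function bagOfWords(sentence) {
    var counts = {};
    sentence.toLowerCase().split(/\s+/).forEach(function (w) {
        counts[w] = (counts[w] || 0) + 1;
    });
    return Object.keys(counts).sort().map(function (w) {
        return w + ":" + counts[w];
    }).join(",");
}

bagOfWords("John saw Susan") === bagOfWords("Susan saw John");   // true: the order information is gone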

Even simple order information isn’t quite enough. Even tho “John” and “Susan” are in the same order in the sentences “John saw Susan” and “John was seen by Susan”, they mean different things because of the verbal morphology. Topicalization in English makes things even worse, because it has no morphological effects, so “JOHN Susan saw” doesn’t mean the same thing as “John saw Susan”. (If you don’t quite like this sentence, try “Susan I don’t know, but John I know”.)

If we have order together with some kind of constructional analysis (active vs. passive, topicalized vs. not) we’d be able to sort out some of this, perhaps mapping to a single meaning for all of these. This would constitute a richer class of semantic representations, and a considerably richer mapping from the syntactic form to the semantic form.

Grouping

Not only does linear order matter, but there are also grouping or association effects. That is to say, the meanings that correspond to different parts of a sentence “go together” in ways that seem to depend on how the parts of the sentence themselves go together. Multi-clause sentences are a perfect example of this.

Consider the sentence “John saw Susan before Michael met Stephen”. As a bag of words we’re going to miss out on a lot of what’s going on here, but even if we have some consideration of word order as described above, we probably don’t want to view this sentence as one big whole. That is to say, we probably don’t want to look at this as having four noun phrases — John, Susan, Michael, and Stephen — in some construction.

It’d be better to view it as two clauses, “John saw Susan” and “Michael met Stephen” combined with the word “before”, so that John and Susan “go with” the seeing, and Michael and Stephen “go with” the meeting, not vice versa. The fact that we can reverse the clauses and change only the temporal order, not the goings-with — “Michael met Stephen before John saw Susan” — emphasizes this.

Another example of how things associate in non-trivial ways is ambiguity. Consider “the man read the book on the table”. We can say this to describe two situations: one where we’re talking about a man sitting on a table while reading a book, and another where the man is sitting in a chair reading a book which happens to be currently (at the time of speech) on the table. The second interpretation is especially drawn out in this conversation:

Q: Which book did he read? A: The one on the table.

There is also no way to get a third interpretation where the guy who read the book is currently sitting on the table, even tho he read it while sitting in a chair. So this conversation does NOT cohere with that sentence:

Q: Who read the book? A: The guy on the table.

Any solution we give to this problem, so that these meanings are all distinguished, will require more structure than just a bag of words or even linear order.

Non-uniform Composition

Another aspect of meaning which requires structure is the way meanings compose to form larger meanings. Some meanings can go together to form larger meanings, while others can’t. For instance, while both dogs and ideas can be interesting, only dogs can be brown. A brown idea is kind of nonsense (or at the least, we don’t know how to judge when an idea is brown).

Even when meanings can compose, they sometimes compose differently. A brown dog and a brown elephant are brown in more or less the same way; it’s kind of conjunctive or intersective, in that a brown dog is both brown and a dog. On the other hand, a small dog and a small elephant are small in different ways. A small dog is on the whole pretty minute, but a small elephant isn’t small, it’s still about as big as some cars! When the meanings of words like “small” (called subsective adjectives) combine with noun meanings, they produce a meaning that is relative to the noun meaning: small for an elephant, not just small and an elephant.

Inference Patterns

Inferences are another place where structure reveals itself. One thing we do with meanings is reason about them, and figure out what other things are true in virtue of others being true. For example, if someone says to me, “John is tall and Susan is short”, then I can go tell someone else, “John is tall”. That is to say, the sorts of meanings we have support inferences of the form

if I know A and B then I know A

Other things, such as adjectives, have similar inferences, so if I know “John ate a Margherita pizza”, then I know “John ate a pizza”, because after all, a Margherita pizza is just a kind of pizza. If I negate the sentence, however, this inference is not valid: if I know “John didn’t eat a Margherita pizza”, that does not mean I know “John didn’t eat a pizza”. After all, he might have eaten a Hawaiian pizza instead. However, if I swap the sentences, it’s valid: if I know “John didn’t eat a pizza”, then I know “John didn’t eat a Margherita pizza”.

These entailment patterns aren’t just found with the word “not”, but with all sorts of other words. Quantifiers, such as “every”, “few”, and “no”, all impose entailment patterns, and sometimes not the same ones in all places. Consider the following examples, where ⊢ means “entails”, and ⊬ means “does not entail”:

every dog barked ⊢ every brown dog barked
every brown dog barked ⊬ every dog barked
every dog barked ⊬ every dog barked loudly
every dog barked loudly ⊢ every dog barked

few dogs barked ⊬ few brown dogs barked
few brown dogs barked ⊬ few dogs barked
few dogs barked ⊢ few dogs barked loudly
few dogs barked loudly ⊬ few dogs barked

no dogs barked ⊢ no brown dogs barked
no brown dogs barked ⊬ no dogs barked
no dogs barked ⊢ no dogs barked loudly
no dogs barked loudly ⊬ no dogs barked

The basic pattern for these sentences is “Q dog(s) barked”, either with or without “brown” and “loudly”. But these words have completely different patterns of entailment! Here’s a table summarizing the patterns and the data:

A = Q dog(s) barked
B = Q brown dog(s) barked
C = Q dog(s) barked loudly

Q \ Direction | A ⊢ B | B ⊢ A | A ⊢ C | C ⊢ A
---------------------------------------------
every         |   Y   |   N   |   N   |   Y
few           |   N   |   N   |   Y   |   N
no            |   Y   |   N   |   Y   |   N

Invisible Meanings

Lastly, some words or constructions seem to have more meaning than is obvious from just the obvious parts of the sentence. Consider the verb “copy”. It has a subject, the copier, and an object, the original being copied. There’s no way to express the duplicate object being created, however. You can’t say “John copied the paper the page” to mean he copied the paper and out from the copying machine came the copied page, or something. There’s just no way to use that verb and have a noun phrase in there that represents the duplicate.

But the duplicate is still crucially part of the meaning, and in fact can be referred to later on. In the absence of a previously mentioned duplicate, it’s weird to say “he put the duplicate in his book”. Which duplicate? You didn’t mention any duplicate. But if you first say “he copied the paper”, then you know exactly which duplicate, even tho there’s no phrase in that sentence that mentions it explicitly. It’s implicit to the verb meaning, but still accessible.

Wrap Up

Building an AI requires grasping meaning, and hopefully this post has helped to lay out some of the reasons why structure is necessary for meaning and therefore for NLP for AI. There are solutions to all of these that are pretty straightforward, but which are vastly richer in structure than the kind of things you find in a new school NLP approach. I think new school algorithms can be combined with these richer structures, tho, so the future looks bright. If you think there might be a pure new school solution that doesn’t require richer structure, let me know! I’d be interested in alternatives.

If you have comments or questions, get in touch. I’m @psygnisfive on Twitter, augur on freenode (in #languagengine and #haskell). Here’s the HN thread if you prefer that mode, and also the Reddit thread.

languagengine – Blog – The Conversational AI Problem Landscape by @psygnisfive

Original post: http://languagengine.co/blog/the-conversational-ai-problem-landscape/

 

The Conversational AI Problem Landscape

posted by Darryl on 10 Jan 2015

In this blog post, I’m going to discuss some of the major problems that I think conversational AI faces in achieving truly human-like performance. The issues I have in mind are

– Syntax / Grammar
– Non-idiomatic Meaning
– Idiomatic Meaning
– Metaphors / Analogies
– Speaker’s Intentions and Goals

Some of them are, in my opinion, relatively easy to solve, at least in terms of building the necessary programs/tools (data and learning is a separate issue), but others, especially the last one, are substantially harder. But hopefully I can convince you that at least the first four are something we can seriously consider working on.

Syntax / Grammar

Syntax/grammar is the way words combine to form larger units like phrases, sentences, etc. A great deal is known about syntax from decades of scientific investigation, despite what you might conclude from just looking at popular NLP.

One major reason NLP doesn’t care much about scientific insights is that most NLP tasks don’t need to use syntax. Spam detection, for instance, or simple sentiment analysis, can be done without any consideration to syntax (tho sentiment analysis does seem to benefit from some primitive syntax).

Additionally, learning grammars (rather than hand-building them) is hard. CFGs are generally learned from pre-existing parsed corpuses built by linguists, like the Penn Treebank. As a result, NLPers tend to prefer simpler, but drastically less capable, regular grammars (in the form of Markov models/n-gram models/etc.), which require nothing but string inputs.

Non-idiomatic Meaning

By non-idiomatic meaning I simply mean the meaning that an expression has in virtue of the meanings of its parts and the way those parts are put together, often called compositional meaning. For example, “brown cow” means something approximately like “is brown” and “is a cow” at the same time. Something is a brown cow, non-idiomatically, if it’s brown, and if it’s a cow.

In a very real sense, non-idiomatic meaning is extremely well understood, far more than mainstream NLP would indicate, just like syntax. Higher-order logics can capture the compositional aspects tidily, while the core meanings of words/predicates can be pushed out to the non-linguistic systems (e.g. it’s a non-linguistic fact that the word “JPEG” classifies the things it classifies the way it does).

Idiomatic Meaning

Idiomatic meaning is just meaning that is not compositional in the above sense. So while “brown cow” non-idiomatically or compositionally means a thing which is both brown and a cow, idiomatically it’s the name of a kind of drink. Similarly, while “kick the bucket” non-idiomatically/compositionally means just some act of punting some bucket, idiomatically it means to die.

Idioms are also pretty easy. Typically, meaning is assigned to syntactic structures as follows: words get assigned values, while local regions of a parse tree are given compositional rules (for instance, subject + verb phrase is interpreted by function application, applying the meaning of the verb phrase to the meaning of the subject).

Idioms are just another way of assigning meanings to a parse tree, e.g. the tree for “kick the bucket” is assigned a second meaning “die” alongside its normal compositional meaning. There are relatively few commonly used idioms, so enumerating them in a language system isn’t too huge a task.
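
As a rough illustration (my sketch, not code from the post), idiomatic meanings could be keyed on a canonical rendering of the parse tree and returned alongside the compositional meaning; the tree encoding and the meaning values here are invented for the example:

// hypothetical idiom table, keyed on a flattened rendering of the parse tree
var idioms = {
    "[VP kick [NP the bucket]]": "die",
    "[NP brown cow]": "a kind of drink"
};

// return every meaning a parse tree supports: the compositional one,
// plus any idiomatic meaning listed for that exact tree
function meanings(tree, compositionalMeaning) {
    var result = [compositionalMeaning];
    if (idioms[tree]) {
        result.push(idioms[tree]);
    }
    return result;
}

meanings("[VP kick [NP the bucket]]", "punt the bucket");   // ["punt the bucket", "die"]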

Metaphors / Analogies

Metaphorical or analogical language is considerably harder than idiomatic meaning. Unlike idioms, metaphors and analogies tend to be invented on the fly. There are certain conventional metaphors, especially conceptual metaphors for thinking, such as the change-of-state-is-motion metaphor, but we can invent and pick up new metaphors pretty easily just by encountering them.

The typical structure of metaphors/analogies is that we’re thinking about one domain of knowledge by way of another domain of knowledge. For instance, thinking of proving a theory as a kind of game, or programs as a kind of recipe. These work by identifying key overlaps in the features and structures of both domains, and then ignoring the differences. Good metaphors and analogies also have mappings back and forth associating the structure of the domains so that translation is possible.

Take, for instance, the linguistic metaphor “a river of umbrellas”, as one might describe that one scene from Minority Report outside the mall. The logic behind the metaphor is: the concept ‘river’ is linked to the word “river”, and rivers are long, narrow, and consist of many individual bits moving in roughly the same direction. This is also what the sidewalk looked like — long, narrow, with many individual bits moving in roughly the same direction, only instead of the bits being bits of water, they were people holding umbrellas. So we can think of this sidewalk full of people like we think of rivers, and therefore use the words for rivers to talk about the sidewalk and the people on it with their umbrellas.

The most likely scalable solution to metaphorical/analogical thinking is identifying the core components that are readily available. Geometric, spatial, kinematic, and numeric features tend to be very easy to pick out as features for making analogies. As for identifying the use of metaphors in language, metaphorical usages are typically highly atypical. The earlier metaphor of a river of umbrellas is unusual in that rivers aren’t made of umbrellas, and this mismatch between expectation and usage can be used to cue a search for the metaphorical mapping that underlies the expression.

Fortunately, there is also quite a bit of work on conceptual metaphor that can be drawn upon, including a great deal of work in cognitive linguistics such as Lakoff and Johnson’s Metaphors we Live By, Lakoff’s Women, Fire, and Dangerous Things, and Fauconnier’s Mental Spaces, just to name a few classic texts.

Speaker’s Intentions and Goals

Possibly the hardest aspect of interacting conversationally is knowing how a speaker’s intentions and goals in the conversation guide the plausible moves the AI can make. All sorts of Gricean principles become important, most of which can probably be seen as non-linguistic principles governing how humans interact more broadly.

Consider for example a situation where a person’s car runs out of gas on the side of the road, and a stranger walks by. The driver might say to the stranger, “do you know where I can get some gas?” and the passer-by might say “there’s a gas station just up the road a bit”. Now, any geek will know that the _literal_ answer to the driver’s question is “yes” and that’s it — it was a yes/no question, after all. But the passer-by instead gives the location of a gas station. Why? Well, probably a good explanation is that the passer-by can infer the speaker’s goal — getting some gas — and the information required to achieve that goal — the location of a gas station — and provides as much of that information as possible.

The driver’s question was the first step in getting to that goal, perhaps with a literal conversation going something like this:

Driver: Do you know where I can get some gas?
Passer-by: Yes.
Driver: Please tell me.
Passer-by: At the gas station up the road a bit.

But the passer-by can see this conversation play out, and immediately jumps to the crucial information — where the gas station is located.

This kind of interaction is especially hard because it requires, firstly, the ability to identify the speaker’s goals and intentions (in this case, getting gas), and secondly, the ability to figure out appropriate responses. A cooperative principle — be helpful, help the person achieve their goals — is a good start for an AI (better than a principle of ‘avoid humans, they suck’), but something more is still necessary.

A possible solution is some inference tools, especially frame-based ones. The situation the driver was in ought to be highly biasing towards the driver’s goal being to get gas, even without the driver saying a thing, because the functions and purposes of the various things involved, plus typical failure modes, would strongly push towards getting gas as important. However, this now requires large-scale world knowledge (tho projects like Cyc and OpenCog might be able to be relied on?).

If you have comments or questions, get in touch. I’m @psygnisfive on Twitter, augur on freenode (in #languagengine and #haskell). Here’s the HN thread if you prefer that mode, and also the Reddit threads.