Hi everyone, I'm here with another AMA, and this is part of our series coming to you straight from INBOUND in Boston. I'm really excited right now to be sitting down with Alex Birkett. He's a growth marketer at HubSpot, and for those of you who watched my interview with David Khim, you know a little bit more about what growth marketing is and what a growth marketer is, so we won't get into that too much. He really focuses on HubSpot's freemium acquisition, so for those of you out there at SaaS companies this will probably be a really great talk, but it's also for any of you who are really concerned about conversion rate optimization, or who have hit that maturity level in your company where it's really time. I mean, we can all use improved conversions, but this is especially for those of you really thinking about conversion rate optimization. He also comes from a background of experimental design and data analysis, like myself (a man after my own heart), and also SEO, so I have a list of really good questions. The other interesting thing about Alex is that he travels six to seven months out of the year, so some of you may also want to see my interview with Justin Champion, who also lives on the road. So, I don't know, is HubSpot going fully remote? It seems like it.
It seems like that. I mean, it's still a pretty small percentage of the company, but they're certainly more giving and more free about it than other companies that I've experienced.
All right, so, um, thank you for joining me, Alex. I'm going to just get into it. I mentioned that pretty much everyone can probably use conversion optimization, but how do conversion optimization strategies differ between, say, a startup and an established or mature company?
I think, well, okay, so there's a spectrum of companies and sizes, and what stage you're at matters a lot in terms of opportunity cost. Let's put a really early-stage startup at the far end of that spectrum: maybe you haven't found product-market fit, you're still kind of struggling with customer development and product development, and you don't really know where your traction channels are. At that point, conversion optimization has such a marginal return that the costs aren't worth it. You're going to have to put time and effort into building tests to maybe get something like a 5% return, which doesn't mean much at that scale. Then at the other extreme end of the spectrum would be something like walmart.com, where there's a clear business model, they make a ton of money, and a 1% lift means the world to them. So how you approach conversion optimization, in terms of where you put your resources, fundamentally depends a ton on the size of your traffic and conversions.
A startup is generally not going to have that much traffic or that many conversions. Also, the word "optimize" kind of implies that there's something there that exists in a stable format. You need something to optimize, whether it's a channel or your website or whatever else, and oftentimes at the startup stage you're making fundamental tweaks to your business model. So it's less useful, and it's kind of in the semantics of what you consider a startup and what you consider conversion optimization, too, because I've always looked at CRO as sort of a customer-centric, data-supported model for making decisions. That includes things like analytics and customer research, which obviously help you at the startup stage as well. But when it comes to running mass-scale A/B tests, that's kind of off the table for startups, and I think a lot of advice gets given out that doesn't take that into account. It's like, oh, you should test this, test that, test everything. At a startup you don't need to test your button colors; it's very, very marginal.
That's all everyone thinks about: button colors.
So I would definitely look into, I don't know, developing a customer base, positioning your product in a market you can win, those fundamental startup things. Then with scale your resources shift and it becomes much more important to do CRO. It's a competitive advantage in a crowded marketplace, and when you can eke out those small wins consistently, that compounds over time, so it becomes much, much more beneficial for a large company.
That's a brilliant answer. So we talked a little bit about
A/B testing, who should be doing it and at what stage, but what do you think is the most misunderstood aspect of A/B testing out there?
Well, okay, there are sort of two, and they're somewhat the same. People look at it as a tactic you can layer on, like you can CRO-ify a landing page. But A/B testing to me, and this is a shout-out to Matt Gershoff from Conductrics, he's kind of the person who made me look at CRO this way, is a method by which you can reduce uncertainty to a level where you're more comfortable making decisions. And there's a cost to that, right? You've got to set up a test, so it's not a free thing. So I've always looked at it as a user research method where you can validate decisions, and do so in a way where you're more confident making those decisions. There's no magic, like, okay, let's just test buttons, test headlines, let's CRO-ify this page and we're going to get wins. That's not really what it's about for me. It's more about being able to inspire or support more innovative and creative ideas, because there's not as much of a loss: if you implement a losing idea but you're testing it, you only have something like a four-week period where it's a loser, instead of it sitting there forever until a year later you realize it didn't work. So it's a way to mitigate risk and encourage innovative ideas.
And then, I guess this is related, but there's some bad advice given on A/B testing, and this is something Justin Rondeau from DigitalMarketer says: you shouldn't test everything. Like I was saying, there's a cost to it, right? You've got to get developers and designers, and depending on the complexity of the test it could cost weeks or months of dev work to set it up, and running it has an opportunity cost because you could be working on something else. One of the biggest mistakes I see people make is running an A/B test when they're going to implement the variation no matter what. Just look at it in terms of ROI: you're going to implement variation B regardless, so that's going to be the outcome, and all you have is a cost associated with it, so it's a net negative ROI no matter what you do. I think if you're being intellectually honest with yourself and you're going to implement your variation because you like it and it's cool, don't run the test. A lot of times people will say, oh, I want to know by how much it wins, or what the effect is. But with statistics you're estimating those things based on a sample of the population. You can't say, just because something is significant at a p-value of 0.05, that we've got a 5% win and it's confidently a 5% win; that's an estimation based on that sample. So yeah, feel confident sometimes not running a test.
That leads really well into our next question,
and I think it's a pet peeve of mine, because I have a lot of training in statistics. For example, the way people in marketing report p-values: I think there's a lot of confusion about what exactly a p-value is, and about equating something being statistically significant with practical significance, like you said, and there's a cost to these things. So the question is, with so much data available it seems like everyone thinks they're a statistician, and I have qualms with the way some marketers report and apply statistics. What are your thoughts on marketers using statistics?
Okay, so there's a lot to that one. Marketers are probably no worse on average than anybody else using statistics, so that's one point. Product managers, marketers, everyone should learn how to use statistics properly. But also, I think we should be running A/B tests. Obviously it's a great way to validate decisions, mitigate risk, and encourage innovation, so to do that we need to understand how to run a test and how to do it properly, and there's no barrier to entry that says a marketer can't do that. So I think we should all kind of buckle down and learn the fundamentals of running a proper A/B test. You can do this even at a basic level. I think there are some really good checklists out there; I just read one, I want to say the author is a data scientist at Pinterest or some company like that, but it covered the basic things you should always keep in mind: run the test for full business cycles, account for seasonality, calculate your sample size ahead of time, and don't call a test as soon as it hits significance. Yeah, that's the most common mistake I see.
Classic, classic. P-hacking happens all the time, and the thing is, these are issues that the scientific community has been dealing with for a long time and, quote unquote, knows it shouldn't do, yet some unethical scientists still do.
There's a whole replication crisis going on right now.
Exactly, because there are some scientists who know how to use statistics but are manipulating the system.
Oh yeah, I would say that's really uncommon in marketing and product management. Most of the time it's mistakes made out of ignorance, which is probably worse, because then you're more confident about your decision and it may or may not be right. I think a lot of the time, if you just learn how to mitigate false positives, you're probably fine. Because what are we doing when we p-hack? We're calling a test at significance, whether that's after three days, or with a sample size of 150, or whatever the case is. If you keep peeking at a running test, something like 70% of tests are going to hit significance at some point no matter what.
And especially with a large enough sample size: if you have a lot of website traffic, the smallest effect is going to get detected. It doesn't mean it's going to make a difference for your business.
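To make the "calculate your sample size ahead of time" point concrete, here's a minimal sketch using statsmodels' power analysis. The baseline rate, the minimum lift worth detecting, and the power target are invented for illustration, not numbers from the conversation, but the shape of the calculation is the point: decide the smallest effect you care about before the test, and let that dictate how long it runs.

```python
# Minimal sketch: pre-compute the sample size for an A/B test before launching it.
# The baseline rate, minimum lift, and power target below are illustrative assumptions.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline_rate = 0.05            # current conversion rate (assumed)
relative_lift = 0.10            # smallest lift worth detecting: 10% relative (assumed)
variant_rate = baseline_rate * (1 + relative_lift)

# Convert the two proportions into a standardized effect size (Cohen's h).
effect_size = proportion_effectsize(variant_rate, baseline_rate)

# Visitors needed per variant for 80% power at a two-sided alpha of 0.05.
n_per_variant = NormalIndPower().solve_power(
    effect_size=effect_size, alpha=0.05, power=0.80, alternative="two-sided"
)
print(f"~{n_per_variant:,.0f} visitors per variant")  # on the order of ~15k per variant here
```

Run those numbers with a startup's traffic and the required test duration quickly becomes impractical, which is the earlier point about mass-scale testing being off the table at that stage; with Walmart-scale traffic, even effects far too small to matter will clear the bar.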
Well, that's the thing. I think the best-case scenario is that you're wasting a lot of time and resources setting those tests up and implementing variations that are null, where there's no effect on the business. The worst case is that you run a test poorly, the variation is actually worse, and you still implement it. A lot of the time, and this is kind of a nitty-gritty issue, it comes down to one-tailed versus two-tailed testing. There's a case to be made that if a test comes out basically equal, you can implement variation B, or whichever one you like; it's kind of a tie, so you can pick your favorite. But if you're running a one-tailed test, it doesn't actually tell you about the downside. It doesn't tell you whether the variation is worse than the control, so you might actually be hurting your business and not know it.
Is most of the software out there running one-tailed tests when people use A/B testing tools?
I have no idea, I actually don't know.
Hmm, I think I'm going to do some digging.
Yeah, some do, some don't, and a lot of them let you choose. I think in most cases you'd probably use a one-tailed test; I think it's probably the industry standard for A/B testing, though I'm actually not totally sure. But it's really only an issue when you're going to implement a variation if there's a tie, because there's this gray area and you might make a bad decision.
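Here's a minimal sketch of the one-tailed versus two-tailed distinction he's describing, with made-up counts in which variation B actually converts worse than the control (the numbers, and the use of statsmodels, are illustrative assumptions, not anything from the interview):

```python
# Minimal sketch with invented counts: variation B converts worse than control A.
from statsmodels.stats.proportion import proportions_ztest

conversions = [470, 540]        # [variant B, control A] conversions (illustrative)
visitors = [10_000, 10_000]

# One-tailed question: "is B better than A?" -- it can never flag B as significantly worse.
_, p_one_tailed = proportions_ztest(conversions, visitors, alternative="larger")

# Two-tailed question: "are A and B different at all?"
_, p_two_tailed = proportions_ztest(conversions, visitors, alternative="two-sided")

print(f"one-tailed p = {p_one_tailed:.3f}")  # ~0.99: no evidence B is better
print(f"two-tailed p = {p_two_tailed:.3f}")  # ~0.02: B is significantly different, and here worse
```

The one-tailed result just looks like "no win," while the two-tailed test surfaces that the variant is plausibly hurting you, which is exactly the downside he says a one-tailed test won't tell you about.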
But I was talking about this earlier with a friend, about the term "statistical significance," in particular "significance," because it sounds like importance, right? Or meaningfulness. So we say we have a statistically meaningful result, but that's not what significance means. It's really an arbitrary threshold, right? A p-value cutoff of 0.05 is basically saying we're willing to take the risk that 5% of the time we'll call a winner when the result is essentially meaningless and there's no real difference between the control and the variant. And it doesn't mean the variant necessarily won by X percent.
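One way to see the "it's an estimate from a sample, not a confirmed X% win" point is to look at the confidence interval around the observed difference rather than just the p-value. A minimal sketch with invented counts:

```python
# Minimal sketch with invented counts: a "significant" lift still comes with a wide range.
from statsmodels.stats.proportion import confint_proportions_2indep

# Variant: 5.5% of 20,000 visitors; control: 5.0% of 20,000 (illustrative numbers).
low, high = confint_proportions_2indep(
    count1=1_100, nobs1=20_000,   # variant conversions, visitors
    count2=1_000, nobs2=20_000,   # control conversions, visitors
    method="wald", alpha=0.05,
)
print(f"95% CI for the absolute lift: {low:+.4f} to {high:+.4f}")
# Roughly +0.06 to +0.94 percentage points: the true lift could be trivial or sizable.
```

Even a clean "significant" result only brackets the effect, so reporting it as a confirmed 10% relative win overstates what the sample can tell you.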
Right, well, that's kind of where I see it in the reporting. People make this assumption that they can be "95% confident," or they make these weird statements, and I think it creates a false sense that the variant is definitely better. Or, if you go to 99, say a p-value of 0.01, that it's for sure different. There's still a small chance it isn't, and if it's a very expensive change or optimization to implement, you're still running that risk.
Well, people look at it as binary, like you were saying: once it hits that threshold, we're confident this is the right decision and it won by this percentage. And I think it also outsources the decision-making to the data, because the data said it, right? You didn't say it, you didn't make that interpretation, the data says it's true, so we're going with it. But if you set up the test poorly, the data could be telling you the wrong thing, and you're extra confident about it.
I think sometimes, and this isn't to say you should make ego- or gut-based decisions, but at least if you say, "I think that headline's better, let's just do it," it could be wrong, but there's some intellectual honesty in saying that. Whereas if we pit two headlines against each other and the data says one's better, but the test was set up poorly, you have a sort of undue confidence; you have more confidence that you're right.
And you may not evaluate it against experience. I mean, I think as an experienced marketer there are things that you know from time in the field, things that aren't always data-based, and sometimes you need to trust those intuitions, or even theories, because they've been established over time, and not just for you but for other marketers. Just because one test tells you the opposite one time, that's not enough to flip the whole industry, right?
Totally.
So you're talking about best practices, and yeah, there are some. I think sometimes you know things with some level of confidence, right? And if a test is going in the opposite direction, you should question it a little bit.
Yeah, you should always be skeptical, but also be skeptical that best practices are truly best practices. Although, I guess, if it's your starting point, you can be fairly confident that at least those reflect what customers expect, or what the industry sets forth. At a certain level, if you have your shopping cart icon in the top right of an e-commerce site, that's a best practice because people expect it to be there, so you can be fairly confident starting with those. Then from there you're going to be A/B testing, and if a result is truly surprising, that's good on one hand, but it should also, well, it's not quite a red flag, but it should intrigue you. You shouldn't just say, "Yes, let's move on with it." You should think about digging deeper into why that might be the case, or whether something went wrong.
There's a law, Twyman's law, that says something like: if a number looks surprising, it's probably wrong. It's sort of a data-science law. I always think about it any time I see a presentation with some mind-blowing number, like "this test won by two hundred percent." I immediately think, I don't know about that.
Yeah, be skeptical. I think that's a really good point: skeptical of best practices, and then skeptical also of the data, making sure that you're interpreting it correctly, that you set up your test correctly, and that you had a proper hypothesis to begin with. For example, when people set up a test and change multiple things, but then attribute the difference to the one thing they want to attribute it to. Or how about this:
I think there's a good one. What's that webcomic that does the data-science stuff, XKCD, I think? You know the one I'm talking about. They had one where scientists test whether jelly beans cause acne, and they test every single flavor until they finally find one that's statistically significant, so they're p-hacking, essentially. Well, we'll do that with A/B testing when we set up a test with six variations and don't use a multiple-comparisons correction like Dunnett's or something like that. Then we'll also do post-hoc analysis where we dive into seven different segments, and it's like, oh, variation C is significant with desktop users. And then we'll use multiple metrics to do it, too: okay, so it wasn't significant on conversion rate or revenue per visitor, but look at the click-through rate. And then we're p-hacking.
And really, at that point your significance threshold should be split across all of those comparisons. Exactly. And people are not doing that.
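For the "split your significance threshold across all of those comparisons" point, here's a minimal sketch of a standard multiple-comparisons adjustment. He mentions Dunnett's test; this uses the simpler Holm correction as a stand-in, and the p-values are invented to mimic slicing one experiment by variant, segment, and metric:

```python
# Minimal sketch: invented p-values from slicing one test many ways
# (several variants x a few segments x a few metrics). Uncorrected, the 0.03 looks like a win.
from statsmodels.stats.multitest import multipletests

p_values = [0.40, 0.22, 0.03, 0.51, 0.08, 0.76, 0.19, 0.64]  # illustrative only

reject, p_adjusted, _, _ = multipletests(p_values, alpha=0.05, method="holm")

for raw, adj, sig in zip(p_values, p_adjusted, reject):
    print(f"raw p = {raw:.2f}   adjusted p = {adj:.2f}   significant: {sig}")
# After correction none of these survive: the "variation C wins with desktop users"
# finding was likely just noise from making many comparisons.
```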
Some software will do that. Also, you can't set up a test where there are so many different metrics to look at. You sort of have to have what Ronny Kohavi calls the overall evaluation criterion, or the north star metric; people call it different names. But you have to have that one metric that really matters, otherwise you're constantly going to be finding winners and implementing things based on click-through rate and micro-conversions like that.
When it comes to user research, which do you find provides the most valuable insights, qualitative or quantitative data, and why? Or maybe it depends?
I like a mixture of both, and I like the answer "it depends." I think you want to match the research tool to the problem that you have, and you should lead with good questions. So, let me think about this.
Yeah, so you also want to factor in the cost of getting that research. I always look at user research and A/B testing as a certain reduction of uncertainty in order to make a decision. To make that decision you're asking a business question, right, and you want a certain amount of information to make it. It's always going to be less than 100% certainty, but what's the cost to acquire that information? I think the easier you can get the information, the better, whether it's qualitative or quantitative.
So the easiest scenario would be if you want something like churn data, which you can get from your quantitative analytics. You have tracking set up on your site, some sort of user measurement, some sort of robust analytics solution, and with a couple of lines of SQL you can find the information you're looking for. Let's say churn by acquisition channel, because you want to optimize your marketing spend. If you use Google Analytics or Amplitude or whatever your analytics tool is, you can find that in a matter of minutes.
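As a rough illustration of that churn-by-channel pull (in practice it really is a couple of lines of SQL, or a saved report in your analytics tool), here's a minimal pandas sketch; the table, column names, and channels are all invented:

```python
# Minimal sketch: churn rate by acquisition channel from a flat users table.
# The data, column names, and channels are invented for illustration.
import pandas as pd

users = pd.DataFrame({
    "acquisition_channel": ["organic", "organic", "facebook", "facebook", "email", "email"],
    "churned":             [False,     True,      False,      False,      True,    False],
})

churn_by_channel = (
    users.groupby("acquisition_channel")["churned"]
         .mean()                       # share of users in each channel who churned
         .sort_values()
         .rename("churn_rate")
)
print(churn_by_channel)
```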
But then, why are you asking that question? You clearly want to do something with that information. You don't just want to know that the churn rate for Facebook users is lower; that's a great piece of trivia, but it doesn't mean much on its own. So then you might switch to qualitative and ask, well, why is that? And "why" is a good indicator that you're dealing with a qualitative question. In that case you have a bunch of tools at your disposal: you might look at user testing, you might look at session replays tagged by acquisition source, you might look at customer feedback, so you put up a poll on your website and find out what problems people are dealing with given the acquisition source they came through. Then you've got a whole list of possible solutions, or possible problem and opportunity areas, and that's a good place to start. I mean, this is conversion optimization in a nutshell, right? You're looking for opportunities or problems, and you're finding possible solutions and the why
behind them. Then you kind of return to a quantitative-data problem when you start testing those solutions, and you can prioritize them by impact and ease and all that. That's where we get into the ICE framework and the PIE framework.
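He name-checks ICE and PIE only in passing, so as a minimal sketch of what that prioritization looks like in practice, here's an ICE-style scoring pass (Impact, Confidence, Ease on a 1-10 scale, averaged here; some teams multiply the three instead). The ideas and the scores are invented:

```python
# Minimal sketch: ICE scoring (Impact, Confidence, Ease) to rank test ideas.
# The ideas and the 1-10 scores are invented for illustration.
ideas = [
    {"name": "Rewrite pricing-page headline", "impact": 8, "confidence": 6, "ease": 9},
    {"name": "Redesign signup flow",          "impact": 9, "confidence": 5, "ease": 3},
    {"name": "Add exit-intent survey",        "impact": 4, "confidence": 7, "ease": 8},
]

for idea in ideas:
    idea["ice"] = (idea["impact"] + idea["confidence"] + idea["ease"]) / 3

for idea in sorted(ideas, key=lambda i: i["ice"], reverse=True):
    print(f'{idea["ice"]:.1f}  {idea["name"]}')
```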
But there are different research methodologies that are more conducive to different problems. Even within quantitative data, right: if you're testing out different solutions or different user experiences, you don't necessarily want historical data, you want a controlled experiment, so A/B testing is the right tool for that rather than looking at your Google Analytics. Or let's say you've got no traffic. In that case you might go with a prototype and do user testing on it, or a five-second test to get people's basic understanding of what your website does. So there are different tools for the different problems you might be dealing with, and it depends on who you have access to. Do you have users, do you have traffic? If you're a startup and no one's ever seen your product or your site before, then you're going to have to do a little more recruitment.
And yeah, you can also mix tools together. I don't know if I have a perfect example of this, but take A/B testing. I don't want to say it can be misleading, but it's telling you information about one set of users during one window of time, and it doesn't tell you about potential second- or third-order effects. So if you're tricking users into doing something they don't want to do, a sort of dark pattern, your data might tell you that's the best possible variation; but if you put up a feedback poll or run a user test and people say they unknowingly went into that purchase or whatever, that's going to have bad long-term effects.
Yeah, yeah, exactly, and that's when you're talking
about like some of those growth hacks can actually be very unethical or
in the end just piss users off or you know that customer never comes back yeah
so increases things like churn rate or whatever because yeah I made a purchase
and I didn't actually want that purchase yeah even though you put the button this big so of
course I clicked it because there was nothing else to click on the screen or
whatever yeah those things are really hard to get accurate data on so there's
a sort of I mean all of this is interpretation but if you have the
qualitative there and the quantitative you can make a more informed decision as
opposed to just having the quantitative and saying obviously variation B is the
best, that's the data we have but if you have a bunch of customer complaints
you should listen to those too, and at least factor that in.
And also, if you think about something like an A/B test, you only know whether this version or that version did better on whatever metric I decided on. You can't test things that aren't in front of people, so you don't know what they might have liked better, or what they really thought of the experience. That you could probably get from qualitative data.
Yeah, well, you're dealing with the best metric you have access to. Most of the time a business isn't really run on conversions, or one-time purchases, or signups; it's run on lifetime value and retention-type stuff, and we can't run a test for ten months or two years or whatever.
Ask someone every couple of weeks: do you trust us, what do you think of us?
Right, exactly. You have to factor in some sort of qualitative data in a lot of those decisions too. And sometimes,
I mean hopefully we have a good sort of editor or editorial where you're not
gonna let those tests go through that are like unethical or bad but sometimes
you will and you need a sort of mix just to get like a full holistic picture of
what you're doing with your business.
So, speaking of conversion rate optimization, let's talk content optimization. How do you suggest finding those opportunities?
With content optimization in particular you're looking for, well, with almost any opportunity you're looking for high impact, which is generally going to mean high traffic. Actually, let's take a step back. Content optimization could mean a couple of things, and in most of our minds it means optimizing search rankings: if you're on page two or three, nobody's going to page two or three to find your thing, but Google already sees you as somewhat relevant to that search, so you can try to build some links or beef up the content in some way. I think those are probably the best places to start, because it puts you on the map, at least if it's for a term relevant to your business. If you build a customer feedback tool, like HubSpot did, and you have a guide on customer feedback ranking on page two, I would put a ton of time and effort into building that post up, just because it's thematically related to your business and it's probably important that people find you for a term related to customer feedback.
And it has a chance of getting there.
It's got a chance; there's some indication that it's there. It's ranked, it's been indexed by Google, you're just not strong enough on the other factors where Google thinks other competitors are stronger. That's normally links, or the quality of the content, or relevance, or some sort of technical SEO, title tags, whatever, but there's a whole SEO field that can solve those problems.
And the opportunity identification is oftentimes qualitative and business-related, but it can also just be traffic-related. If you see that a search term has something like 10k search volume and you're on page two, well, if you get to page one you're going to capture a large part of that. Then the other side is: what's currently ranking but not bringing in conversions or the desired business results? Conversion-wise, a blog is usually trying to get email subscriptions, or at least leads through things like ebooks or webinars. So you need Google Analytics or whatever (you can see all this information in HubSpot if you use a marketing automation tool), and if there's tons and tons of traffic but nobody's converting, then maybe you have a relevance problem, and you can think about what your offer is on the page.
Make sure you have an offer on the page, first of all.
Oh yeah, yeah. If you have 100,000 visitors going to a blog post and no offer, you can start there.
Yes, you're wasting your time. That is super low-hanging fruit.
Absolutely, so build an offer. That's a good one.
And then, yeah, have an email subscription sign-up, that's important too; have a place where people can give you their information if they want to. Then look into relevance after that. Say you're selling conversion rate optimization software. You probably have a bunch of topics on your blog, things like A/B testing, user experience design, a multitude of topics that are at least somewhat related if you're thinking from a CRO angle. But if somebody comes in to a blog post on, let's say, "10 UX Design Principles for Home Pages," and your one offer across all of your blog posts is "The Ultimate Guide to CRO," there's sort of a disconnect. You can change the offer on that content somewhat easily, or build that blog post out into a bigger ebook, but if the offer is something like "The Ultimate Guide to User Experience Design," that's a much more relevant offer. So you can start to optimize opportunities like that. As far as identification goes, it's usually the size of the opportunity and then the ease, meaning how much malleability you think there is. If you have a ton of traffic for "top Beyoncé songs" and you sell marketing software, you're not going to get a ton of relevant conversions.
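Here's a minimal sketch of the two opportunity filters he describes: posts stuck on page two for terms with real volume, and posts with plenty of traffic but almost no conversions. The rows, column names, and thresholds are invented; in practice the data would come from something like a Search Console plus analytics export:

```python
# Minimal sketch: flag content-optimization opportunities from a pages report.
# The rows, column names, and thresholds are invented for illustration.
import pandas as pd

pages = pd.DataFrame({
    "url":             ["/customer-feedback-guide", "/ux-principles", "/beyonce-songs"],
    "avg_position":    [14.2, 6.1, 3.5],
    "search_volume":   [4_400, 1_900, 90_000],
    "monthly_visits":  [1_200, 8_500, 110_000],
    "conversion_rate": [0.020, 0.001, 0.000],
})

# 1) "Striking distance": ranking on page two for a term with meaningful volume.
page_two = pages[pages["avg_position"].between(11, 20) & (pages["search_volume"] >= 1_000)]

# 2) Traffic with no payoff: lots of visits, almost no conversions (offer missing or mismatched).
leaky = pages[(pages["monthly_visits"] >= 5_000) & (pages["conversion_rate"] < 0.005)]

print("Build these up:", page_two["url"].tolist())   # ['/customer-feedback-guide']
print("Fix the offer:", leaky["url"].tolist())       # ['/ux-principles', '/beyonce-songs']
```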
BuzzFeed released an article talking about some problem behaviors around direct link buying and selling, which I think most marketers know about but maybe the general public doesn't, and you basically talked about how these practices proliferate when publications depend on a free contributor model: Forbes, for example, and some other big names out there that do this. You said there's no such thing as a free lunch in content marketing. But at the end you kind of put the blame on the publications and leave it to the reader to be skeptical; you don't call on the marketer to change their behavior. Why?
That's a fantastic question. Well, the saying about a free lunch comes from economics, right,
it's something Friedman said, I can't remember the context, but basically somebody's paying for it. Nobody's getting this content for free, and the reader is paying in most cases, which sucks. That's why you have to be skeptical, and I kind of give tips for readers, right, like follow the authors who are really good and promote them. But then on the other side, these publications are the ones trying to get a free lunch, and the marketers are really just exploiting that loophole. They know that if they give these publications cheap or free content, they can bring money in by selling links on the other side. So they're offering something free to Forbes or whoever else is publishing the content, and the marketer is naturally incentivized to exploit that loophole, knowing it exists and knowing you can get away with building high-domain-authority links for basically free. You'd almost have to be not a great marketer not to do it.
And that's why I don't blame the marketer, because asking them not to do it is sort of like, I don't know, asking a salmon not to swim upstream. One salmon's going to do it every once in a while, but it's a bad overall strategy, because you're asking every marketer to go against their incentives to be ethical. Or I don't even know if it's an ethics issue per se, but it's asking people to go against their incentives. And I think if a problem like this is systemic, it's not something that just one publication is doing, it's not a one-off; otherwise it would be easier to solve, like, oh, just flag Forbes and Google de-ranks them or something like that. That's the point of the piece BuzzFeed did.
And BuzzFeed put up this article, for example.
Right, so when the piece was calling out the marketers, I was like, as much as I'd like to dish it out on Neil Patel, I don't think he's necessarily the bad actor here, or at least it's not really on him to fix the problem. And what we're doing with content marketing is kind of a generalized version of journalism. We're using it for business purposes, but we're putting content out there, and oftentimes readers don't distinguish between different editorial standards. They think in their heads that Forbes is good, which is why the problem is so bad. When you're on a company blog, you sort of expect a certain level of promotion, or content that's not completely objective; they're trying to sell something, and that incentive is there.
And I think people think these are legitimate journalists writing these articles, living by journalistic rules.
They think it's similar to the New York Times, where there's an editorial board and articles get checked, fact-checked twice, and edited for grammar and everything. That's just not true with these free contributor models, so as a reader you're sort of under the illusion that it's objective, good journalism, and it's not.
And I think, if anything, it's just really important for the public to understand that you should question the things you read, in the same way you should question best practices, and question the results of your tests, and all of that. Bring a skeptical eye to everything.
Yeah. I also think it's cool that BuzzFeed published that piece; I think it's cool they wrote it. The more sunlight that's shed on an issue, the better; that's the best cure for these things. So, trying to be transparent, at least.
Exactly. We'll give them a point.
But leading into that, I'm a big reader, and you've kind of talked about the reader. One way for readers to have a better chance of getting quality content is to read books, because they're more expensive to produce and therefore less likely to be bullshit, and they probably have editors and editorial boards and all of that. So what books have you read in the last year that have had a real impact on you, professionally or personally?
Skin in the Game, by Nassim Taleb. All of his books are fantastic, especially if you're in conversion optimization and data, because he's a big skeptic, but that book in particular was really good. I think it actually has a lot to do with cheap content, too: a lot of the people publishing these things don't actually have consequences, or skin in the game. The marketers who got called out in that BuzzFeed piece, I'm pretty sure BuzzFeed linked to them, so if anything they benefited; there's no actual skin in the game there. You start to look at incentive structures differently when you think, okay, what risk is this person taking by making this prediction? If there's only upside for them, why should we listen? Someone making ten predictions a day will get one right, and we'll glorify that prediction when it comes true. So yeah, Skin in the Game was really cool.
One I just finished that is really dense but interesting was The Strategy Paradox. The summary would be that a strategy only succeeds when it's fully committed to, but that's also the best chance for failure: fully committing to a strategy that turns out to be wrong for one reason or another. A lot of companies try to mitigate that by settling on a wishy-washy, middle-ground, mediocre strategy, and that's the most competitive sphere; that's where all the companies are competing away their profits. It also talks about how, at different levels of the org structure, the higher up you go the further out you should be dealing with strategic flexibility and options, and the lower you go the less you should care about options at all; you should only worry about executing and committing on the strategy.
Oh, that's really interesting.
Well, I hope you guys all enjoyed what Alex had to share with us. If you want to know more about conversion rate optimization or SEO, you can always reach out to me. If you liked that, I bet you'll like this, or check out this video, and don't forget to subscribe.