“
There are no miracle formulas for graceful aging that fit everyone. But there are many good ideas that you can use to personalize your own plan of action. It is in your best interest to live a long and fruitful life. There are various obstacles as you age, and using these tips can help you face them with confidence. One method for handling age gracefully is to quit obsessing about data and measurements. Many people focus on their height, age and weight and can get easily stressed out. Let the doctor worry about the numbers and get on with your life. As you get older, fun is more important than numbers. Try putting more into your exercise routine. Growing older means that the benefits of regular exercise only get more valuable to you. Consider a brisk walk several times a week. Round out the week with two days of strength exercises. This combination of walking and strength exercises will help you maintain a strong, healthy body and keep you feeling young. Whenever you can, spread peace and joy. Make others happy to help make yourself feel great. Happiness is free! It’s one of the best things you can share with others, and yourself, too. No matter where you are living, decorate it to make it feel like home. As you gain in years, you may find that you are living in a different place than you expected to live. If you have moved to a new home, use special things around your living space that make you feel welcome and comfortable. As the years pass, your home becomes a place you consider safe. Days may seem longer and more challenging, so you need to make sure that your home is an oasis of comfort and personality you can retreat to. Living in your home should be a joy and comfort. Your life is a wonderful adventure that should be embraced at every given moment. By setting milestones, as you did for your children years ago, you can start feeling the way you did back then. Make sure you eat healthy foods. 
The majority of what you eat should be plant based. Making healthy food choices can improve your overall health and provide you with enough energy to stay vitalized throughout your day. Replacing red meat with more fish is an easy way to improve the health of your heart. Meat can clog your arteries, along with contributing to heart disease and other ailments. Fish, on the other hand, breaks up the cholesterol, so it is a great addition to your diet. Look at your aging as an opportunity to re-engage in a favorite activity. Now that you have time, you can focus on the interests that you may have been required to set aside to make time for your family or career responsibilities. Hobbies can actively engage your mind and body. Consult your personal physician about anti-aging supplements that are going to work for you. The right combination of multivitamins, antioxidants and perhaps anti-inflammatory relief should be discussed. Taking these will allow you to have increased activity and less down time due to issues with aging. Include these in any daily plan you build. Always request a copy of your medical records. If you switch doctors, you’ll have them with you. In addition, if you have to visit a specialist, he or she will be able to have your records immediately without having to wait. Hopefully these tips will give you a more empowered approach to the aging process. The choice is yours and yours alone. Planning for your golden years is made even easier when you use the tips from this article. ” - Things You Need To Know About Aging http://bit.ly/1S6enLJ via Tumblr http://ift.tt/1MtNDYO
“
Leigh Miller, Yankee Stadium, francis_leigh, Some rights reserved. A couple of months ago, I wrote about a Google patent that involved rewriting queries, titled Investigating Google RankBrain and Query Term Substitutions. There's likely a lot more to how Google's RankBrain approach works, but I came across a patent that seems to be related to the patent I wrote about in that post, and thought it was worth sharing and starting a discussion about. The patent I wrote about in that post was Using concepts as contexts for query term substitutions. The title for this new patent was very similar to that one (Synonym identification based on categorical contexts), and the more recent patent was granted on December 1st of this year. The new patent starts off describing a scenario that is a good example of how it works. The inventors tell us: For example, learning that restaurants is a good synonym for food in the query [food in San Francisco] is relatively straightforward, because the volume of query traffic including the query term San Francisco is very large. For much smaller cities, such as Grey Bull, Wyo., the query stream may have never seen any supporting evidence for this synonym substitution. That both cities are entities that fit into the same category, that of cities, means that they could potentially be good synonyms for each other. That's what the inventors of this patent tell us specifically, using the San Francisco and Grey Bull example: For example, if San Francisco and Grey Bull are both cities, and restaurants is a good synonym for food in queries about San Francisco, the synonym relationship may apply to queries related to Grey Bull as well. Thus, the category city may be considered a useful category when identifying synonyms for query expansion in circumstances such as this. 
So, we are told that the process involved in this patent is to identify categories from a knowledge base involving a number of entities, where other entities within that same category could potentially be synonyms for each other in similar contexts. The process from the patent involves identifying those entities from a query stream, and identifying the category as one that they call a coherent category. The patent tells us that a coherent category is one in which a certain threshold of terms tends to co-occur in a query stream involving those entities. The patent tells us, for instance, that a category that might include entities that are cities, villages, and towns might see a lot of co-occurring terms involving hotels and roads. If the number of co-occurring terms appearing in that query stream meets a certain threshold, it would be considered a coherent category, and the entities from the same categories could possibly then be used as synonyms for each other. The patent in question is:

Synonym identification based on categorical contexts
Invented by: Zachary A. Garrett, Takahiro Nakajima, Tasuku Oonishi
Assignee: Google
US Patent 9,201,945
Granted: December 1, 2015
Filed: March 8, 2013

Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for training recognition canonical representations corresponding to named-entity phrases in a second natural language based on translating a set of allowable expressions with canonical representations from a first natural language, which may be generated by expanding a context-free grammar for the allowable expressions for the first natural language. 
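To make the coherent-category idea concrete, here is a minimal sketch of the test the patent describes: count context terms that co-occur with more than one entity of a category in a query stream, and call the category coherent if enough such terms exist. This is my own reconstruction, not code from the patent; the entity names, the sample queries, and the `min_shared_terms` threshold are all invented for illustration.

```python
def coherent_category(category_entities, query_stream, min_shared_terms=2):
    """Treat a category as 'coherent' when queries mentioning its member
    entities share enough co-occurring context terms (hypothetical threshold)."""
    term_entities = {}  # context term -> set of entities it co-occurs with
    for entity in category_entities:
        for query in query_stream:
            words = query.lower().split()
            if entity in words:
                for w in words:
                    if w != entity:
                        term_entities.setdefault(w, set()).add(entity)
    # Keep terms seen alongside at least two different entities in the category.
    shared = sorted(t for t, ents in term_entities.items() if len(ents) >= 2)
    return len(shared) >= min_shared_terms, shared

# Toy query stream: two made-up cities, with "hotels" and "roads" co-occurring.
queries = [
    "hotels in springfield", "roads near springfield",
    "hotels in fairview", "roads near fairview",
]
is_coherent, shared_terms = coherent_category(["springfield", "fairview"], queries)
# shared_terms includes "hotels" and "roads", so the toy category is coherent
```

Once a category passes a test like this, the patent's logic is that a synonym learned for one member (restaurants for food with San Francisco) can be carried over to other members (Grey Bull) that lack query volume of their own.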
Take Aways: When I wrote about the query term substitution patent I referred to at the start of this post, I included a number of examples of queries that were re-written based upon some substitutions of query terms that might seem reasonable to a search engine looking at words that tended to show up, or co-occur, in a query stream involving those search terms. For instance, someone searching for [New York Yankees stadium] was likely searching for results that involved baseball, since queries that included New York Yankees and stadium also often included the term baseball. That patent didn't use the term co-occur, nor did it explain how a knowledge base might be used to substitute entities that might be in the same categories like this one does, but the idea that a shared context like entity categories can be used to trigger entity substitutions in a query is interesting. It's worth spending time with both patents, reading through each of them multiple times, and thinking about how they are being used. ” - How Google Might Make Better Synonym Substitutions Using Knowledge Base Categories http://bit.ly/1qhuazS via Tumblr http://ift.tt/1UTFhfu
“
It's amazing to see how even in 2016 Web marketers continue to get (what I consider to be) some of the most basic concepts wrong. We have had several years to set the record straight but not everyone is getting the memo. So let's run down the list of mythical claims and ideas that, hopefully, will die a quick and well-deserved death in 2016.

SEO MYTHS STILL CIRCULATING IN 2016

The myth: Google measures user satisfaction.
Why it's wrong: Google has no clue about which of your visitors are satisfied with your content. They click on a listing in Google, find something, and then some of them go back to Google.
The truth: People continue to confuse user engagement with user experience. User experience is what you put on the page. User engagement is what the visitor does on the page. Google cannot track what these people do on the vast majority of Web pages (because Google Analytics is only embedded on a fraction of Web pages). Furthermore, Google is not the only way people reach a page. And if your average page views per visit is greater than 2, Google isn't even driving most of the traffic to your pages. By believing this myth you are assuming that Google takes a fractional amount of data and dangerously extrapolates it to create rules for the entire Web. That is completely ridiculous. Google does not measure user satisfaction with anything other than its own search results, and that is absolutely, totally, and completely unrelated to the degree of user satisfaction with your Web content.

The myth: Mobile search is replacing the desktop.
Why it's wrong: Mobile search is mostly charting new territory in terms of query content.
The truth: Desktop search has dipped somewhat over the past few years but remains strong. Users have learned to do things with their phones they never could have done with their laptops, desktops, and tower computers. Mobile search happens in a far different context than desktop search (for the most part). Web marketers are missing huge opportunities by mistaking mobile search for an evolution from desktop search. Mobile leaches relatively little search activity from the desktop. We were never doing in-store product searches on our desktops. And it was a rare individual who was checking his laptop in the car to find out where local stores were or how the traffic up ahead might be congested.

The myth: Users matter more than content.
Why it's wrong: All three members of the Searchable Web Ecosystem (Publishers, Indexers, and Searchers) are equally important. It has always been that way. It always will be that way.
The truth: It's incredibly naive and ignorant to suggest that users matter more than content. There is no competition between users and content. They have a relationship that cannot be managed by making one more important than the other. People keep attacking the "content is king" phrase like it's some jargon that has outlived its time. It's obvious that anyone who denigrates "content is king" doesn't know where the phrase came from or what it means (read Bill Gates' original essay here). There is no money without content. Period. For Web marketing it's all about connecting the right users with the right content, and neither is more important than the other. For perspective, ask yourself which is more important: the customer in the grocery store or the food being sold to the customer. And then stop saying stupid things like "Content is no longer king."

The myth: Technical SEO includes Semantic Search.
Why it's wrong: Technical SEO is only about delivering the content to the search engine. Nothing more.
The truth: We should do a series of roundup articles to learn what people think the phrase semantic search is supposed to mean (in their own words). We've had semantic search for years. It has mostly failed to catch on. In fact, it has largely just failed. Why do marketers keep talking about semantic search as if it's the next big thing? Search engines have moved on from the semantic model. They are more interested in contexts than mere meanings. When a search engineer says semantic [X] he is almost certainly talking about something other than what you think. This is one of the worst examples of non-technical people grabbing a technical expression and giving it a Frankensteinian life of its own. And at least half of you are thinking, "But what about Google's RankBrain?" If you're thinking that, ask yourself how RankBrain can be about semantic search if it's just delivering canned results for new queries. There is no semantics in that search. RankBrain is essentially a short-cut to what are sometimes wrong results. It's just one tool among many, not the whole search system. Please, don't ever use any phrase starting with semantic again when discussing SEO.

The myth: Rankings are based on links.
Why it's wrong: 2002 called. It wants its link bomb spam back.
The truth: Yes, link data is included among Bing's and Google's mostly undisclosed set of signals used for filtering and ranking search results. That's all you know, so stop believing in the link fairy. One manual action stops all your anchor-text-passing links in their tracks. But there are other non-signal things that affect rankings, including: query deserves freshness (a filter that uses time-sensitive signals); page quality algorithms like Page Layout, Panda, and Payday Loans; inferred local contexts; and user search history and social connections (via Google Plus). Anyone who is aware of these things should know better than to say links are the most important signal or that it's all about links.

The myth: Content Marketing is (better than) SEO.
Why it's wrong: What most people call content marketing still sounds like a combination of link building and content spam to me.
The truth: Real content marketing creates demand (and awareness) where none previously existed. If you are just producing content for people who already know who you are and are interested in your work, you're just publishing content. You are a Content Publisher (and you should be proud of the label). Call yourself a Digital Publisher if that makes you feel more techy and 21st century. If people could just agree on whether their mislabeled content marketing was just about publishing content OR just about building links, I could live with the evolution in the phrase's use. But I honestly don't know what you people are talking about half the time, and the other half of the time I have to read through your case studies just to see if you're content publishing or link building. Please, just call whatever you are doing what it really is and retire the phrase content marketing forever. The John Deere Company may not thank you but I will. NOTE: Personally, I think the phrases content publishing and content publisher are stupid and redundant, but it seems like we have to belabor the obvious to get people to wake up and realize that they are not fooling anyone.

The myth: Growth hacking is better than SEO.
Why it's wrong: Every growth hacking article I have read to date describes basic Web marketing practices.
The truth: Apparently people liked the phrase growth hacking so much they decided they could use it to describe anything obscured by a popup push page. In order for growth hacking to be better than search engine optimization you'll have to drop all mentions of keywords, content, and links from your growth hacking articles. And include something about how to grow Website traffic without content and links (because in SEO that is all you get to work with).

The myth: Real data is better than theories.
Why it's wrong: Real data by itself doesn't do much. As soon as you explain what you think your real data means, however, you are theorizing.
The truth: Theory offers an explanation for the known facts. There is no guarantee the explanation is correct with respect to either the known facts or reality (and there is usually a gap between the two). People mostly hypothesize (make a guess about what will happen) rather than theorize (try to explain what happened) in Web marketing, but theory is a part of basic human nature. You cannot escape it and you certainly aren't rising above it. In the physical sciences there are long-running competitive traditions between theorists and experimentalists. You could draw a comparison between the two with, say, architects and construction managers. In Web marketing everyone is pretty much a combination of the two. You run your experiments and you try to figure out what the data means. Most SEO case studies are unscientific in nature but that doesn't make them completely useless. A lot of these case studies are produced by people trying to sell you something. Practicing scientists do kind of live on a grant-to-grant basis, or project-funding to project-funding. Their peer-reviewed analyses are supposed to just systematically document a lot of facts that, eventually (maybe), some theorists will weave together in a whole new concept that no one has previously thought of. Meanwhile, don't fear the theory. It's not undermining you any more today than it was last week. It's everywhere. People are spewing theory right and left. Learn to live with it. You don't have to agree with it. But you're not going to make it go away by claiming you have real data. It's very easy for a theorist to grab real data. The hard part is coming up with an explanation that is actually correct. Good luck to all of us with that.

The myth: Using plugins is good for SEO.
Why it's wrong: SEO plugins do a lot of harm to Websites as soon as they are installed. The most common mistake SEO plugins make is to automatically apply noindex to archive pages.
The truth: If you don't know enough about search engine optimization to do it yourself, then how are you supposed to choose and properly manage an SEO plugin that does it for you? Regrettably, I still see a lot of people asking about SEO plugins in online communities, and inevitably people start recommending their favorites. I am sure there are people who have been doing SEO as long as I have (since 1998) who are convinced that all archive pages on WordPress blogs should be noindexed. I can't think of anyone whom I personally know who believes that, but I am sure these people must exist. After all, why would SEO plugin developers do something so stupid if there wasn't anyone with a lot of experience telling them to do that? I love the way modern SEO plugins give you granular control over their features. We use a couple of SEO plugins on our Websites. I spend a lot of time disabling default features, but we do actually use the plugins on some content. So SEO plugins are good tools to have, but you really need to learn how to do SEO first before you turn these tools loose. They are not going to pull up a white board and teach you proper SEO. And, frankly, given the assumptions they make, I wouldn't pay any attention to their lessons if they did. What is good for SEO is learning about the relationship between your Website and the search engines, and understanding what affects that relationship. What works for you may not work for someone else. Why? Well, that calls for gathering up some facts and then trying to explain them (a theory), for which I have no room in this article.

The myth: You just need quality (content, links, blogs).
Why it's wrong: Using the noun quality as an unqualified adjective makes you look like you just fell off the turnip truck.
The truth: Yes, I understand that people who say quality BLONKO really mean high quality BLONKO, but their laziness is a powerful sign that they are in over their heads. The fact they even speak in terms of quality at all shows they are grasping at straws. What, exactly, is quality? What is HIGH quality versus LOW quality? Please, spare me your examples, because if you try to offer any on the basis of those two questions you have missed the whole point (again). Quality does not exist in a vacuum. There must be a context. Google's ideas of high quality don't always match my own. I have lost count of how many search results I have abandoned out of disgust because Google's high quality sites were irrelevant or, worse, pure crap. I give them credit for trying. With trillions of URLs to choose from, any set of algorithms is bound to go wrong once in a while. But you, dear Web marketers, keep talking about quality sites, quality links, and quality content. You provide no context, no metrics by which I or anyone else can judge quality as you do. Worse, you don't even provide any examples (neither did I in this article). Granting that we all want high quality whatever, the handful of metrics that have slipped out into the wild (thanks to well-funded marketing efforts) are really, really bad, LOW QUALITY metrics. Take [X] Rank/Authority, for example. Most of you would be appalled if I offered you a link from a low [X] Rank/Authority Website. Why? Because its low [X] value is not high enough for you. That's not a low quality link in my book. It's just a low quality link in your book (and you wonder why YOU struggle with Penguin problems whereas *I* do not). I settle for a lot of low [X] links. I accept them gladly. I do not seek or pursue high [X] links unless someone pays me to go for them, and then I offer no guarantees that they will get those links. A high [X] link is little better than a low [X] link. If you spend enough time in the data and sifting through the theories you'll eventually see that in a Web filled with trillions of links, [X] never marks a very big spot. If I could get only 1 link for a new Website, yes, I would want it to be the best damn link possible. But I don't think in terms of how many links I can get and whether they match someone's idea of quality. I don't believe in your quality. You have never demonstrated that it actually means something. I truly, honestly wish you would all lose that word from your vocabularies. It's painful to see people talk about quality [WHATEVER] without explaining in clear detail what they think is quality (other than their own sites, which is usually what they seem to be talking about). I build traffic for Websites. I do it without fussing over quality BLONKO.

The myth: Domain authority.
Why it's wrong: Bing and Google don't use it.
The truth: Why on Earth are YOU still using it? The same goes for every other SEO metric that attempts to measure link or Web document quality. You might as well be assessing the horsepower of a Toyota Camry by driving a Volvo. You'll get the same quality data and your theories will be just as valid. ” - More SEO Myths for 2016 http://bit.ly/1UP4GXD via Tumblr http://ift.tt/23jx8pB
“
[Estimated read time: 6 minutes] A quantitative analysis of the claim that topics are more important than keywords. What's more important: topics or keywords? This has been a major discussion point in SEO recently, nowhere more so than here on the Moz blog. Rand has given two Whiteboard Fridays in the last two months, and Moz's new Related Topics feature in Moz Pro aims to help you to optimize your site for topics as well as keywords. The idea under discussion is that, since the Hummingbird algorithm update in 2013, Google is getting really good at understanding natural language. So much so, in fact, that it’s now able to identify similar terms, making it less important to worry about minor changes in the wording of your content in order to target specific keyword phrases. People are arguing that it's more important to think about the concepts that Google will interpret, regardless of word choice. While I agree that this is the direction that we’re heading, I wanted to see how true this is now, in the present. So I designed an experiment. The experiment: The question I wanted to answer was: Do searches within the same topic (but with different keyword phrases) give the same result? To this end, I put together 10 groups of 10 keywords each, with each group's keywords signifying (as closely as possible) the same concept. These keywords were selected in order to represent a range of search volume, and across the spectrum of informational to transactional. 
For example, one group of keywords are all synonymous with the phrase “cheapest flight times” (not-so-subtly lifted from Rand's Whiteboard Friday):

cheapest flight times
cheapest time for flights
cheapest times to fly
cheap times for flights
cheap times to fly
fly at cheap times
time of cheapest flights
what time of day are flights cheapest
what time of day to fly cheaply
when are flights cheapest

I put the sample of 100 keywords through a rank-tracking tool, and extracted the top ten organic results for each keyword. Then, for each keyword group, I measured two things. First, the similarity of each topic's SERPs, by position. For example, if every keyword within a group has the same page ranking no. 2, that result will score 10. If 9 results are the same and one is different, nine results will get a score of 9, and the other will score 1. This score is then averaged across all 100 (10 results * 10 keywords) results within each topic. The highest possible score (every SERP identical) is 10, the lowest possible (every result different) is 1. Second, the similarity of each topic's SERPs, by all pages that rank (irrespective of position). As above, but scoring each keyword's results by the number of other keywords that contain that result anywhere in the top 10 results. If a result appears in the top 10 for all keywords in a topic group, it scores a 10, even if the results in the other keywords' SERPs are in different positions. Again, the score is averaged across all results in each topic, with 10 being the highest possible and 1 the lowest. Results: The full analysis and results can be seen in this Google Sheet. This chart shows the results of the experiment for the 10 topic groups. The blue bars represent the “by position” score, averaged across each topic group, and the red bars show the average “all pages” score. The most striking thing about this is the wide range of results that can be seen. 
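The two similarity scores described above can be sketched in a few lines of code. This is my own reconstruction of the methodology as described, not the author's actual analysis script; the function name and input format are assumptions.

```python
def similarity_scores(serps):
    """serps: one topic group's SERPs, each a list of top-10 URLs in rank order.
    Returns (by_position, all_pages): each score averages, over every result
    in the group, how many of the group's keywords share that result."""
    by_position_total, all_pages_total, count = 0, 0, 0
    for serp in serps:
        for pos, url in enumerate(serp):
            # "By position": keywords showing this URL at this exact rank.
            by_position_total += sum(1 for other in serps if other[pos] == url)
            # "All pages": keywords showing this URL anywhere in their top 10.
            all_pages_total += sum(1 for other in serps if url in other)
            count += 1
    return by_position_total / count, all_pages_total / count
```

With 10 keywords per group, identical SERPs score (10.0, 10.0) and fully disjoint SERPs score (1.0, 1.0); reordering the same set of URLs lowers only the first score, which is exactly the gap between the blue and red bars in the chart.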
Topic group D's keywords are 100% identical if you don't take ordering into account, whereas group J only has 38% crossover of results between keywords. We can see from this that targeting individual keywords is definitely not a thing of the past. For most of the topic groups, the pages that rank in the top 10 have little consistency across different wordings of the same concepts. From this we can assume that the primary thing making one page rank where another does not is matching exact keywords. Why is there such variation? If we look into what might be affecting the varying similarities between the different topic groups, we could consider the following factors: searcher intent (informational "Know" vs. transactional "Do" topics), and topics with high competition levels. Searcher intent: Although Google's categorisation of searches into do, know and go can be seen as a false trichotomy, it can still be useful as a simplistic model to classify searcher intent. All of the keyword groups I used can be classed as either informational or transactional. If we break up our topic groups in this way, we can see the following: As you can see, there’s no clear difference between the two types. In fact, the highest and lowest groups (D and J) are both transactional. This means that we can't say (based on this data, at least) that there's any link between the search intent of a topic and whether you should focus on topics over keywords. Keyword difficulty: Another factor that could be correlated with similarity of SERPs is keyword difficulty. As measured by Moz's keyword difficulty tool, this is a proxy for how strong the sites that rank in a SERP are, based on their Page Authority and Domain Authority. My hypothesis here is that, for searches where there are a lot of well-established, high-DA sites ranking, there will be less variation between similar keywords. If this is the case, we would expect to see a positive correlation in the data. This is not borne out by the data. 
The higher the keyword difficulty is across the keywords in a topic group, the less similarity there is between SERPs within that topic group. This correlation is fairly weak (R² = 0.28), so we can't draw any conclusions from this data. One other factor that could explain the lack of pattern in this result is that 100 keywords in 10 groups is a fairly small sample size, and is subject to variation in the selection of keywords to go into each group. It is impossible to perfectly control how “close” in definition the keywords in each group are. Also, it may just be the case that Google simply understands some concepts better than others. This would mean it can see some synonyms as being very closely related, whereas for others it’s still perplexed by the variations, so looks for specific words within the content of each page. Conclusion: So does this mean that we should or shouldn’t ignore Rand when he tells us to forget about keywords and focus on topics? Somewhat unsatisfyingly, the answer is a strong “maybe.” While for some search topics there’s a lot of variation based on the exact wording of the keywords, for others we can see that Google understands what users mean when they search and sees variations as equivalent. The key takeaway from this? Both keywords and topics are important. You should still do keyword research. Keyword research is always going to be essential. But you should also consider the bigger picture, and as more tools that allow you to use natural language processing become available, take advantage of them to understand the overall topics you should write about, too. It may be a useful exercise to carry out this type of analysis within your own vertical, and see how well Google can tell apart the similar keywords you want to target. You can then use this to inform how exact your targeting should be. Let me know what you think, and if you have any questions, in the comments. ” - Are Keywords Really Dead? 
An Experiment http://bit.ly/1Yc2ZSg via Tumblr http://ift.tt/1Tx7pDQ