If you want to protect your creative work and prevent it from being copied or stolen, you're probably considering registering a copyright. Luckily for you, the process is easy and doesn't even require tearing yourself away from the internet.
Step 1: Make sure your work is eligible.
Not just anything can be copyrighted, and a lot of people get confused about the difference between a copyright and a patent. Copyrights exist to protect created work and media, like books, music and software. Patents exist to protect inventions, processes and objects, like scientific devices or unique consumer products.
To find out if your work is eligible for a copyright, check out categories and examples on the government registration portal.
Step 2: Get the stuff you need.
Before you fill out the online application, disable your popup blocker as well as any third-party toolbars in your browser (the government website recommends you use Firefox). You should also make sure your method of payment is readily available, as there is up to a $55 processing fee ($35 for a single author who is the sole claimant in a single work). Since your work must be reviewed for copyright approval, you should either have a digital file at the ready to attach or have access to a printer for the shipping label if you're submitting a hard copy.
Step 3: Choose the right application type.
The two most common applications for copyright are the standard application and the single application. There are videos detailing the step-by-step process for each application, and these are the definitions the copyright office provides:
Standard: The standard application may be used to register most works, including a work by one author, a joint work, a work made for hire, a derivative work, a collective work or a compilation.
Single: The single application may be used to register one work (such as one poem, one song or one photograph) created by one individual.
Step 4: File.
Copyright.gov has great documentation if you run into any snags, but the filing process is straightforward. Start the process by signing up for an account and then follow the instructions provided. Keep in mind that it may take upward of three months to hear back on your copyright status. If you run into a real snag, you can call for help at 877-476-0778 between 8:30 a.m. and 5 p.m. EST, Monday through Friday.
Note: You can also use the traditional post office method if you so choose – or you can file online but then submit a hard copy of your work through the mail. You can find copies of the paper forms to print out on Copyright.gov. A basic registration filed entirely on paper costs $85.
In this industry, we often discuss marketing strategies and sales tactics as if investing enough in the right strategies can make any business successful. The reality is less optimistic. Nine out of 10 startups fail, and in many cases that is because the concept was flawed from the start.
Below are four warning signs that your e-commerce idea needs to be retooled in order to be successful in the long run. This isn’t to say that your brand is doomed to failure if you have already picked up one of these ideas and started running with it, but if these signs sound familiar, now would be a good time to start considering a pivot.
1. “E-commerce marketplace” is your brand’s only identity
The most successful e-commerce sites on the web are retailers and marketplaces like Amazon, eBay and Etsy. Naturally, anybody who wants to make it big selling products online has their eyes on these companies and is paying close attention to what they do.
But no modern e-commerce site will be successful by merely trying to replicate what these sites have done. We’re not saying that you can’t compete with and outperform these three industry giants in certain areas of the market, but merely being a marketplace where shoppers find products isn’t how you are going to do that.
To find success, you need something to differentiate your brand from the big three. Remember that Amazon’s brand initially focused on being the place to buy books, eBay was the place to auction and bid, and Etsy was a place for one-of-a-kind products made by creative craftspeople. These brands were successful because they had a starting point that differentiated them from “generic retailer #2387.”
For an idea of how important this is, consider the case of Susty. Susty makes environmentally friendly, responsibly made disposable tableware. But this wasn’t immediately clear from their original homepage. Changing the homepage to clarify this differentiating factor led to a 250 percent increase in conversions.
To be successful, a differentiating factor needs to be meaningful and significant. It can’t be, “We’re Amazon, but our site design is more visually appealing.” A lot of e-commerce sites think they can find success with an approach like that, and most of them end up barely eking out an existence or failing.
2. You have a boilerplate lead capturing strategy
The digital marketing industry is a very open space, with many firms and publications offering up great advice to the industry at large about how to capture leads. This is a great thing, and not inherently problematic.
Problems do start to show up, however, when generic advice isn’t tailored to your business. This is an especially big problem for e-commerce brands since most lead-capturing advice is published by bloggers who sell their own products, marketers for B2B companies, and of course marketing firms that are promoting themselves.
The lead capturing strategies that work for these businesses are often quite different from the ones that work for e-commerce sites.
You’re probably familiar with what these boilerplate lead capturing strategies look like. They are typically built around lead magnets such as e-books or whitepapers.
There’s nothing inherently wrong with these lead capturing strategies, and e-commerce brands certainly can put them to use. But this inbound marketing industry standard was really developed with bloggers and B2B brands in mind. While we’ve already pointed out that you shouldn’t copy Amazon, eBay, or Etsy, it’s telling that none of them puts much stock in a lead capturing strategy like this.
Tactics built with the e-commerce industry in mind focus more on lead capture during checkout, coupons, and deals. For an example of this in action, consider this case study about Rad Power Bikes: After introducing a pop-up asking visitors to sign up for email updates about sales, promos, and new products, they were able to increase email captures by 302 percent.
Instead, build your strategy from the ground up, designing it to capture various types of leads, each of them demanding a different approach. This includes everything from top-of-funnel leads with an interest in topics or subcultures related to your business, to consumers looking for a solution to a problem that your products could help solve.
And, of course, capitalizing on previous buyers is paramount.
Don’t misunderstand: we encourage you to use as many of the tactics and strategic elements you come across as possible, as long as they are developed and honed with your brand and target audience in mind.
3. You have a generic selling strategy.
The generic selling strategy is one of landing pages, AdWords and conversion rate optimization tactics. While these are useful tools, they don’t form the basis of a strategy. As you develop your selling strategy, your brand, unique selling proposition, and target audience should factor into your approach. Take these points into consideration:
Are there selling strategies that wouldn’t be a good fit for your brand’s image? For example, paid ads for a tech-savvy brand, or publishing articles on industry websites when you are marketing yourself as fresh, fun, and youthful.
Are there strategies that play directly into your unique selling proposition? For example, an “environmentally conscious eBay” could sponsor environmental events and charities, start petitions, and publish guest editorials on environmental sites.
Are there specific places your target audiences spend time online? Specific interests they have that you can play into? Subcultural signifiers and symbols you can play with?
Asking questions like these will help you develop a selling strategy that goes above and beyond the generic, leading to improved results.
4. There's no personalization
Personalization is so built into the modern digital landscape that most consumers will be disappointed or uninterested if their shopping experience lacks it. Recommendations based on a customer’s buying history, in relation to the buying history of others, have become an industry standard. Forrester found that 77 percent of consumers have chosen, recommended or paid more for brands that offer personalization, and 35 percent of Amazon’s sales are the result of its product recommendation engine.
Recommendation engines are available for most e-commerce platforms, such as Personalizer by Limespot for Shopify and BigCommerce, and Barilliance for Magento, Volusion, and others.
While personalization doesn’t necessarily need to be built in from the very beginning, it’s important to ensure that you don’t scale in a way that would make incorporating personalization more difficult later. In-depth, user-specific analytics data should be collected from as early in the process as possible. Even if you aren’t using this data to make personalized recommendations yet, it’s important that it is available for later use.
The following data should be collected and associated with users as early on as possible:
Email lists subscribed to
Lead magnets downloaded
Using this data, you can identify trends that can later be used to personalize product recommendations.
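As a rough illustration of "collect early, personalize later," here is a minimal sketch that associates subscriptions and downloads with users and surfaces trending interests. All names and structures here are hypothetical, not any specific platform's API:

```python
# Hypothetical sketch: record user engagement early so it can later
# drive personalized recommendations. Names are illustrative.
from collections import Counter, defaultdict

class UserProfile:
    def __init__(self):
        self.email_lists = set()     # email lists subscribed to
        self.lead_magnets = set()    # lead magnets downloaded

profiles = defaultdict(UserProfile)

def record_subscription(user_id, email_list):
    profiles[user_id].email_lists.add(email_list)

def record_download(user_id, lead_magnet):
    profiles[user_id].lead_magnets.add(lead_magnet)

def trending_interests(topic_of):
    """Count how often each topic appears across all users' activity,
    given a mapping from list/magnet name to a topic label."""
    counts = Counter()
    for p in profiles.values():
        for item in p.email_lists | p.lead_magnets:
            counts[topic_of.get(item, "other")] += 1
    return counts

record_subscription("u1", "cycling-deals")
record_download("u1", "bike-maintenance-guide")
record_subscription("u2", "cycling-deals")
topics = {"cycling-deals": "cycling", "bike-maintenance-guide": "cycling"}
print(trending_interests(topics))  # "cycling" counted three times
```

Even this toy example shows the point of the section: once each event is tied to a user from day one, trend analysis and personalized recommendations become a query over data you already have, rather than a retrofit.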
Bear in mind that personalization isn’t just about product recommendations. Other possibilities include personalized special offers, calls to action that are specific to the user’s place in the funnel, shipping rates by location, localized currency and units, and preserving user filters during browsing.
Successful e-commerce businesses have a unique brand identity, a lead capturing strategy built around that brand, sales approaches tailored to their audiences, and a growth strategy that accounts for audience personalization.
If these elements are missing from your business idea, start thinking about ways to incorporate them, to carve out your space in the market, and to find long term success.
Now more than ever, “big data” is a term widely used by businesses and consumers alike. Consumers have begun to better understand how their data is being used, but many fail to realize the hidden dangers in everyday technology. From smartphones to smart TVs, location services to speech capabilities, user data is often stored without your knowledge. Here are some of the most common yet hidden privacy dangers facing consumers today.
1. Geo-Location
Geo-location can be convenient, especially when you’re lost or need GPS services. However, many fail to realize that information about your location is stored, archived and sold to third parties who want to use it for a wide variety of reasons. For example, are you aware that data is often collected during your shopping trips? A variety of stores purchase location information to determine how long a customer browsed in a particular aisle, so that they can market similar products to those customers in the future. The information may seem harmless, but would you feel the same way if a physical person were following you around collecting it?
2. Social Media
Facebook, Google, Twitter and Instagram are all social media services provided to individuals for “free,” but have you ever wondered what the real cost might be? It is often said that if you don’t have to pay for the service, then you are the product. The hidden cost of using these social media sites is forfeiting personal information for them to sell and profit from. Some individuals might say they don’t mind because they have “nothing to hide,” but wouldn’t you be wary of publicly posting your login credentials without knowing who might have access? Giving these large organizations rights to your private messages can be interpreted as pretty much the same thing.
Another lesser-known fact about Facebook is that it can create “ghost profiles” using facial recognition for people who do not have an account but appear in someone else’s photos. During the Dakota Access Pipeline protests, Facebook sold the private chat messages of users who were discussing the matter to the FBI and local police, as well as private security companies. The police didn’t need a warrant to obtain confidential information; they simply needed to buy it. This is just one of the many ways social media affects those who don’t realize the implications.
3. Web Browsers and Apps
Web browsers on smartphones are referred to as “sandboxed” in the cybersecurity industry, meaning they cannot access general data on the system or control hardware. An app, however, can be coded to do almost anything, including gathering information. For example, the History Channel’s mobile website prompts users to download the app and limits or restricts the pages you can visit. To view more pages, you must download the app, giving up personal information in the process. After downloading, the app asks for permission to access the camera and the microphone on your device to gather additional information about its users.
4. Speech Software and Smart TVs
Speech software such as Cortana, Alexa and Siri has become increasingly popular in the past few years. However, if you are running these services in your home or office, you have an active listening device running at all times. Essentially, you are “bugged.” These services are running, tapping and sending your audio streams to remote servers daily. Many fail to realize that the cameras on these devices can be turned on without the indicator light being activated. All of this can be done without downloading any additional software because it is already built in. Additionally, if you live in the United States and own a smart TV, it’s likely monitoring what you watch and when.
5. Shopping and Savings Cards
Are these just great programs to help you save a little money at various stores? What is in it for the business offering these savings? There are some little-known privacy dangers inherent in the “frequent shopper” or savings cards offered by many grocery stores and retailers. These organizations are saving, analyzing and sharing information on what you buy and when you buy it to predict future sales.
The savings passed on to the consumer are far less than what these companies make by selling information about your purchasing history and habits to outside parties. Kroger and Ingles, specifically, make over 200 percent more profit from the data they sell than the savings the consumer receives. The best way to protect yourself from the sharing of personal information is to limit the number of programs you participate in.
College isn't for everyone. While you might believe you need at least a bachelor's degree to get a decent-paying job, that isn't the case. Sure, advanced degrees like MBAs or Ph.D.s can certainly set you apart from other candidates, but you can land a solid career without one.
If you're short on time or money, or simply aren't interested in continuing your education to get a degree, don't fret – there are plenty of other paths to success, from trade schools to on-site training. According to data from the Bureau of Labor Statistics, these are the top 25 highest-paying jobs you can get without a four-year degree – each with a salary of more than $70,000.
Yes, the idea of seeing into the future is exciting. But, AI, you had me at “detect.” Despite the appeal of prediction, businesses can get ample value just from improved detection. Prediction can follow later.
The Importance of Smoke-Detector Predictability
Prediction is more uncertain, difficult, and expensive than detection. And few businesses (to put it mildly) are so well run that better detection and faster reaction alone wouldn’t be hugely valuable. Although detection may deliver less value than prediction, its ROI can be greater.
Changing perspective from crystal balls to smoke detectors can help the serious data-driven manager in several ways:
Smoke detectors encourage action. Smoke detectors provide early signals of what is already happening, not what might happen. Smoke detectors don’t predict fires; they alert us immediately when there is one. We have existing processes for fire: escape or extinguish. Similarly, businesses can still benefit from detecting issues quickly, even if they are unpredicted. The opportunity to prevent may have passed, but managing existing business processes can start sooner. And just like extinguishing is easier when smoke detectors alert quickly, managers have more options for better outcomes if they have more time.
Smoke detectors make sense. We don’t know how crystal balls work. Most managers don’t know how AI works either. AI has hidden layers that make it difficult for managers to know why AI delivers a given prediction. “Knowing” an outcome without understanding the underlying reasoning makes it harder to trust the results. Blind acceptance is a romantic, not a rational, approach.
In contrast, smoke detectors make sense. Even without a detailed understanding, people know they need to be centrally located with access to airflow — no one would expect a smoke detector in a box in a closet to do much detecting. By similarly thinking about how AI could provide early detection of business issues, managers can then naturally think about the data that could inform the detection. Where are data signals currently missing? Where are data signals low quality? Where are data signals giving false alarms? These questions naturally lead to practical steps to improve the data that fans the flames of modern AI.
Smoke detectors are themselves predictable. As managers recommend investment in solutions that involve AI, the investments are based on expected value. Expected value includes not only the amount but also uncertainty. Prediction is likely more uncertain than detection. It will take a lot of extra value from prediction to offset that uncertainty.
Worse, measurement problems may doom investment targeted at knowing the future from the outset. On the basis of prediction of the future, proactive management will act to change that future. When the predicted event doesn’t happen, is it because the prediction was incorrect or because the managerial action was successful?
For example, instead of trying to predict which customers will churn, managers can shift to better detect which customers are dissatisfied. The implications may be similar, but changes in satisfaction are measurable while customers who were going to leave but didn’t are not.
Prediction might be more valuable, but it comes at greater cost and uncertainty, which can result in lower ROI.
Smoke detectors fit many places. Most businesses have many issues spread throughout the organization that could benefit from earlier detection. While people most likely don’t need more than one crystal ball, smoke detectors are useful in many places. If managers think about specific instances where they would like earlier detection, they will most likely find many different uses.
Value From AI Today
Yes, AI-enabled prediction is a fascinating and useful longer-term organizational goal. Companies are making considerable progress in prediction in many areas. But while the lure of having a crystal ball is certainly appealing, prediction is inherently difficult and may yield a low ROI. As a smoke detector, AI can provide real value for organizations today.
How much can we expect business to lead on sustainability? What should be a company’s biggest priority: Serving its shareholders, providing jobs, or addressing the health of our planet?
Often, these goals are at odds. And yet as governments fail to adequately address climate change, if business doesn’t take the lead, who will? It’s a complicated topic with many strong points of view. So, we’re bringing together two leading voices in the sustainability debate to wrestle with the issues in what is sure to be a lively conversation.
Please join MIT’s Yossi Sheffi, author Andrew Winston, and me on Nov. 1 at 1 p.m. EDT on the MIT campus or via our live stream.
In 1971, philosopher John Rawls proposed a thought experiment to understand the idea of fairness: the veil of ignorance. What if, he asked, we could erase our brains so we had no memory of who we were — our race, our income level, our profession, anything that may influence our opinion? Who would we protect, and who would we serve with our policies?
The veil of ignorance is a philosophical exercise for thinking about justice and society. But it can be applied to the burgeoning field of artificial intelligence (AI) as well. We laud AI outcomes as mathematical, programmatic, and perhaps, inherently better than emotion-laden human decisions. Can AI provide the veil of ignorance that would lead us to objective and ideal outcomes?
The answer so far has been disappointing. However objective we may intend our technology to be, it is ultimately influenced by the people who build it and the data that feeds it. Technologists do not define the objective functions behind AI independent of social context. Data is not objective; it is reflective of pre-existing social and cultural biases. In practice, AI can be a method of perpetuating bias, leading to unintended negative consequences and inequitable outcomes.
Today’s conversation about unintended consequences and fair outcomes is not new. Also in 1971, the U.S. Supreme Court established the notion of “disparate impact” — the predominant legal theory used to review unintended discrimination. Specifically, the Griggs v. Duke Power Co. ruling stated that, independent of intent, disparate and discriminatory outcomes for protected classes (in this case, with regard to hiring) were in violation of Title VII of the Civil Rights Act of 1964. Today, this ruling is widely used to evaluate hiring and housing decisions, and it is the legal basis for inquiry into the potential for AI discrimination. In particular, it defines how to understand “unintended consequences” and whether a decision process’s outcomes are fair. While regulation of AI is in its early stages, fairness will be a key pillar of discerning adverse impact.
The field of AI ethics draws an interdisciplinary group of lawyers, philosophers, social scientists, programmers, and others. Influenced by this community, Accenture Applied Intelligence* has developed a fairness tool to understand and address bias in both the data and the algorithmic models that are at the core of AI systems.
How does the tool work?
Our tool measures disparate impact and corrects for predictive parity to achieve equal opportunity. It exposes potential disparate impact by investigating both the data and the model, and it integrates with existing data science processes: step 1 is used during data investigation, while steps 2 and 3 occur after a model has been developed. In its current form, the fairness evaluation tool works for classification models, which are used, for example, to determine whether or not to grant a loan to an applicant. Classification models group people or items by similar characteristics. The tool helps a user determine whether this grouping occurs in an unfair manner, and provides methods of correction.
There are three steps to the tool:
The first part examines the data for the hidden influence of user-defined sensitive variables on other variables. The tool identifies and quantifies the impact each predictor variable has on the model’s output in order to identify which variables should be the focus of steps 2 and 3. For example, a popular use of AI is in hiring and evaluating employees, but studies show that gender and race are related to salary and promotion. HR organizations could use the tool to ensure that variables like job role and income are independent of people’s race and gender.
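To make the idea of measuring disparate impact concrete, here is a small sketch of the standard "four-fifths rule" check, a common metric in this area. The data and names are illustrative, and this is not Accenture's actual implementation:

```python
# Illustrative disparate-impact check (four-fifths rule): the positive-
# outcome rate of an unprivileged group should be at least 80% of the
# privileged group's rate. All data below is made up.
def selection_rate(outcomes):
    """Fraction of positive (1) outcomes in a group."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(outcomes_a, outcomes_b):
    """Ratio of group A's selection rate to group B's.
    A value below 1 means group A is selected less often."""
    return selection_rate(outcomes_a) / selection_rate(outcomes_b)

# Example: hiring outcomes (1 = hired) for two demographic groups
group_a = [1, 0, 0, 0, 1, 0, 0, 0, 0, 0]  # selection rate 0.2
group_b = [1, 1, 0, 1, 0, 1, 0, 0, 1, 0]  # selection rate 0.5
ratio = disparate_impact_ratio(group_a, group_b)
print(ratio)  # 0.2 / 0.5 = 0.4, below the 0.8 threshold: flag for review
```

A check like this flags *which* sensitive variables deserve scrutiny; the corrective work then happens in the later steps.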
The second part of the tool investigates the distribution of model errors for the different classes of a sensitive variable. If there is a discernibly different pattern (visualized in the tool) of the error terms for men and women, this is an indication that the outcomes may be driven by gender. Our tool applies statistical distortion to fix the error term — that is, the error term becomes more homogeneous across the different groups. The degree of repair is determined by the user.
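A minimal sketch of comparing error distributions across the classes of a sensitive variable might look like the following. This is illustrative only; the real tool visualizes the distributions and applies a user-controlled statistical repair rather than just summarizing them:

```python
# Illustrative check: summarize model errors per class of a sensitive
# variable. Systematically larger errors for one group suggest the
# outcomes may be driven by that variable. Data is made up.
import statistics

def error_stats_by_group(errors, group):
    """Return (mean, population std dev) of errors for each group."""
    by_group = {}
    for e, g in zip(errors, group):
        by_group.setdefault(g, []).append(e)
    return {g: (statistics.mean(v), statistics.pstdev(v))
            for g, v in by_group.items()}

# Residuals for six predictions, split by a sensitive attribute
errors = [0.1, -0.2, 0.0, 0.9, 1.1, 0.8]
group  = ["m", "m", "m", "f", "f", "f"]
print(error_stats_by_group(errors, group))
# Group "f" has a much higher mean error than "m": a pattern worth repairing
```

In the actual tool the repair step then distorts the error term toward homogeneity across groups, with the degree of repair chosen by the user.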
Finally, the tool examines the false positive rate across different groups and enforces a user-determined equal rate of false positives across all groups. False positives are one particular form of model error: instances where the model outcome said “yes” when the answer should have been “no.” For example, if a person was deemed a low credit risk, granted a loan, and then defaulted on that loan, that would be a false positive. The model falsely predicted that the person had low credit risk.
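The per-group false-positive measurement behind this third step can be sketched as follows. The data and names are hypothetical; the actual tool goes further and enforces a user-determined equal rate across groups:

```python
# Illustrative per-group false positive rate for a binary classifier.
# FPR = false positives / actual negatives within each group.
from collections import defaultdict

def false_positive_rates(y_true, y_pred, group):
    """Return the false positive rate for each value of a
    sensitive attribute."""
    fp = defaultdict(int)   # predicted 1 but actually 0
    neg = defaultdict(int)  # actually 0
    for t, p, g in zip(y_true, y_pred, group):
        if t == 0:
            neg[g] += 1
            if p == 1:
                fp[g] += 1
    return {g: fp[g] / neg[g] for g in neg if neg[g]}

# Example: loan decisions (1 = granted) vs. true risk (1 = defaulted)
y_true = [0, 0, 1, 0, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 1, 1, 1, 0]
group  = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(false_positive_rates(y_true, y_pred, group))
# Group "a": 1 of 3 negatives misclassified (1/3); group "b": 2 of 3 (2/3)
```

When the rates diverge like this, the tool adjusts decisions until every group experiences false positives at the same user-chosen rate, at a possible cost in overall accuracy, as discussed below.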
In correcting for fairness, there may be a decline in the model’s accuracy, and the tool illustrates any change in accuracy that may result. Since the balance between accuracy and fairness is context-dependent, we rely on the user to determine the tradeoff. Depending on the context of the tool, it may be a higher priority to ensure equitable outcomes than to optimize accuracy.
One priority in developing this tool was to align with the agile innovation process competitive organizations use today. Therefore, our tool needed to be able to handle large amounts of data so it wouldn’t keep organizations from scaling proof-of-concept AI projects. It also needed to be easily understandable by the average user. And it needed to operate alongside existing data science workflows so the innovation process is not hindered.
Our tool does not simply dictate what is fair. Rather, it assesses and corrects bias within the parameters set by its users who ultimately need to define sensitive variables, error terms and false positive rates. Their decisions should be governed by an organization’s understanding of what we call Responsible AI — the basic principles that an organization will follow when implementing AI to build trust with its stakeholders, avert risks to their business, and contribute value to society.
The tool’s success depended not just on offering solutions to improve algorithms, but also on its ability to explain and understand the outcomes. It is meant to facilitate a larger conversation among data scientists and non-data scientists. By creating a tool that prioritizes human engagement over automation in human-machine collaboration, we aim to inspire the continuation of the fairness debate into actionable ethical practices in AI development.
* An early prototype of the fairness tool was developed at a data study group at the Alan Turing Institute. Accenture thanks the institute and the participating academics for their role.
You and about 20 of your coworkers are sitting around a crowded conference room table, discussing the details of some project. Some people are fighting for attention, trying to get a word in. Others won’t stop talking. Others have tuned the meeting out, retreating to their laptops or phones. At the end of the meeting, the only real outcome is the decision to schedule a follow-up meeting with a smaller group — a group that can actually make some decisions and execute on them.
Why does this happen? People hate to be excluded, so meeting organizers often invite anyone who might need to be involved to avoid hurt feelings. But the result is that most of the people in the meeting are just wasting time; some may literally not know why they’re there.
Whether it’s a meeting, an email thread, or a project team, people need to be excluded from time to time. Being selective frees people up to join more urgent engagements, get creative work done, and stay focused on their most important tasks. How, then, can leaders do this gracefully?
We recommend three steps.
Focus on key employees to protect them from overload. Most leaders try to pare down a meeting list or an email thread by looking for employees who clearly don’t need to be on it. But we suggest the opposite approach. Who is the valuable, collaborative employee you are most tempted to include? Now ask yourself: Is she really necessary?
We pose this question because one of the foundational concepts behind thoughtful exclusion is known as collaborative overload. The term was coined in a 2016 HBR cover story by leadership and psychology professors Rob Cross, Reb Rebele, and Adam Grant. Drawing on original research, they claimed that up to a third of collaborative efforts at work tend to come from just 3% to 5% of employees. These employees are often massively overburdened and, in turn, at risk of burning out.
If the same small group of people get invited to every task force, every special project, every brainstorming meeting, there’s no way they can keep up with more valuable tasks. That’s why the first step to thoughtfully excluding people is to spot those employees at the greatest risk for collaborative overload, and then be incredibly selective about when to include them in meetings or other projects.
Address people’s natural social needs. The acts of excluding and being excluded are intensely emotional, even when people know they’re invited to too many meetings and resent getting too much email.
That’s because humans are social creatures; we naturally want to help those whom we consider close to us. The employees who suffer from collaborative overload take on such heavy burdens in part because they are compelled by these ancient impulses. It’s the same reason leaders over-include: They want others to feel like they belong.
The kind of exclusion that doesn’t trigger backlash or stymie productivity must address people’s varying social needs. If we look at who suffers from collaborative overload the most, we end up with two groups: employees who are too busy to be included in everything and employees who believe being over-included is a sign of prestige and status.
It’s up to leaders, therefore, to identify both groups and show them their time is better spent on projects with the highest return. Sample language might be variations on:
“I know you’ve got a lot of important work on your agenda, and I’d like to keep you off of this upcoming project so that you can focus on what you’ve already got. What do you think?”
“I’d like to take you off of this project, because someone else has a similar point of view. At the same time, you’d be able to add a ton of value to this other project because you bring a unique perspective. Would you be open to that?”
“I noticed that a couple of deadlines have slipped recently and that’s pretty unusual for you. Are there meetings, projects, or other things on your calendar that are consuming time or energy, that we might be able to reallocate? We all have times where we need some breathing room. How can I help?”
When leaders approach exclusion with employees’ social brains in mind, they can be more thoughtful in how they frame their directive.
Set clear expectations. Exclusion only hurts when people expect to be included.
The neuroscience of expectations shows there’s a great cost to mismatched expectations. When the anterior cingulate cortex, a brain region heavily involved in expectation matching and processing social exclusion, detects an error, it kickstarts a process that drains huge amounts of cognitive energy. This happens every time we encounter something unexpected, like seeing a favorite restaurant closed or getting disinvited to a meeting we’d normally join. That’s because the brain wants to make sense of the situation; it expected one thing and got another. Leaders eager to get the most out of their team members, by redirecting their efforts to more valuable activities, must understand and appreciate this aspect of the brain’s behavior.
If you only need a small subset of people attending a meeting, communicate with the rest of the group to ensure each person understands why they are not needed. Laying this groundwork also helps mitigate what psychologists call “social threat.” Just as loud noises and scary images can feel physically threatening, humans are wired to avoid threats in social situations, whether it’s anxiety, uncertainty, or isolation.
Managing people’s expectations ahead of time can act as a buffer against people feeling these kinds of social threats. For instance, the brain craves certainty, and being explicit about meeting participants’ roles offers it. Most of us also crave fairness, which you can provide by being transparent about the reasons for someone’s exclusion. That way, people can be excluded without the sting of feeling excluded.
Thoughtful exclusion in action
Leaders are responsible for appreciating these fundamental, albeit fragile, nuances of perception. When the time comes to launch a new project or host a big meeting, they should make it perfectly clear who needs to be involved, who doesn’t, and the reasons why. This way, employees will better understand how their role fits into the team’s larger mission, and with knowledge of other people’s roles, they’ll know who is working on what.
Think back to that chaotic meeting with 20 people. Thoughtful exclusion pares down that meeting to a core team of six or seven. Since the project manager now thinks hard about whose skills and time are most valuable — and whose would be better served elsewhere — she graciously decides you (and a dozen other people) have more important things to work on. As a result, the project reaches the finish line earlier and those employees who were excluded make greater progress on their own work.
Scale that behavior throughout an organization, and you have more people making better use of their time, tackling projects where their contributions are known, not assumed, to add value.
Exclusion may earn a bad rap in a climate where leaders are admirably sensitive about others’ sense of belonging. And it’s important to remember that thoughtful exclusion is only possible with an appreciation of the benefits of diverse perspectives and inclusive decision-making. But in order to avoid the dreaded logjam of over-inclusion, the brain science makes it clear that, with the right approach, thoughtfully leaving people out could become one of the greatest managerial moves a leader makes.
Youngme Moon, Mihir Desai, and Felix Oberholzer-Gee discuss whether the “retailpocalypse” is real, try to figure out how companies are spending their Trump tax cuts, debate whether share buybacks are a good thing or a bad thing, and offer their picks for the week.
HBR Presents is a network of podcasts curated by HBR editors, bringing you the best business ideas from the leading minds in management. The views and opinions expressed are solely those of the authors and do not necessarily reflect the official policy or position of Harvard Business Review or its affiliates.