The assumption also seems to be that techies do not have fuzzy interests or skills. That, clearly, is a stereotype that doesn’t always hold true. My techie kid also loved his philosophy class. Kids who study and work in tech can also be interested in literature and art, and have good communication and interpersonal skills.
But there is also the assumption that fuzzies lack tech savvy.
Both sides can meet and fulfill corporate needs, without stereotyping either side.
@mom2and, the stereotype exists because it is often true but it is a stereotype because it does not apply to everyone.
There are lots of folks for whom the stereotype doesn’t apply. I live in both worlds as someone with very serious technical training working in a “fuzzy” field. ShawSon, who is a stereotypical nerd in some ways (loves board games, D&D, etc.) studied behavioral econ and math in college, loved Moot Court competitions in HS, won the prize for the best piece in his HS’s senior art show, wrote a young adult novel, was fascinated by constitutional law, was on his college’s debate team, started a tech company as a college senior, later went to get an MS in Computational and Mathematical Engineering and an MBA, and started another tech company while he was finishing grad school. If one buys the OP’s argument, those folks with technical training and “fuzzier” interests/soft skills are probably advantaged in the brave new world relative to others. ShawSon explicitly used the MBA to focus on soft skills – given his math background, most finance courses were fairly trivial – so he focused on how to motivate and lead people and the impressions he was giving people.
I’ve often been surprised that folks in the art world (which ShawWife inhabits) can’t imagine that folks outside of the arts are creative. As someone who studied a lot of math, I think creating new math or physics is almost the ultimate in creativity. Some of my work, which takes ideas from several fields and develops them in another, feels (and is regarded as) very creative – I am creating new ways of thinking about an old problem.
I think it was CP Snow who wrote about the chasm between the literary and scientific cultures. Folks in the tech world often have non-tech interests, but many of the “fuzzies” in the past proudly proclaimed their inability to do math or technical things (think most of the lawyers I’ve dealt with as an example). I think the tide may be turning there, but I think in tomorrow’s world it will be a mistake to celebrate one’s technical incompetence.
@suzyQ7, I agree that LACs that allow kids to go forth without any technical skills are doing them a disservice.
Being a successful entrepreneur involves identifying a problem that people are now having (whether or not they recognize it) and creating a product or service that helps them solve the problem better and more cost-effectively than they do now. Sometimes, it involves wading into blue ocean, where there really is no product or service being offered. Other times it involves doing it better and cheaper than other solutions. Identifying those problems may require some empathy – you may have to imagine the problems that others are facing – although you may experience the problem yourself and say, “Someone should give me a better way to do this.” I think the author of the book cited by the OP thinks that liberal arts majors may be more empathetic on average and hence better at identifying those kinds of problems. Maybe.
@OHMomof2, how do you like your robo-advisor? Which one do you use?
DS21 is both techie and artistic, but not necessarily a people person. His preference is to work independently rather than as part of a bigger team, which will be his biggest challenge to overcome. His favourite pastimes are composing music using a DAW (digital audio workstation) and creating digital animation. He’s starting to think about what he wants to study after high school, and we are trying to find some ideas that will allow him to employ both his techie and creative sides. While it’s easy to say major in computer science and minor in music or visual arts, the bigger challenge is finding an area of employment that is not precarious, is relatively well paying, and integrates both of his interests. One potential area we’ve discovered is User Experience Design, which is very much an intersection of visual design, psychology, marketing, programming and engineering. While AI could disrupt some of the areas that fall under the UX umbrella, I find it hard to believe that it would be capable of assuming all of them.
@shawbridge Wealthfront. I deposited $500 (it has been an experiment for me). I think I had about $22 more than I started with when I last checked. But I’m not going to look today.
It is not only LACs where this happens: colleges may have H/SS general education requirements that demand courses suitable for those majoring in the subject, while allowing science general education requirements to be fulfilled with “rocks for jocks” type courses.
@shawbridge The whole robo advisor conversation is perfect for this thread. Robos can provide basic asset allocation. However, my greater point is that they don’t provide planning. Asset allocation / money management isn’t planning; it’s a very small subset of the financial advisory world. The bigger issues are: should you even have your money in the market, and if so, how much? And what about tax implications, time horizons, risk tolerance, estate planning, elder care planning, etc.? Anyone, or any robo, can figure out some basic asset allocation models. The humans are needed to advise on creating tax-efficient income streams in retirement while paying for long term care protection and making sure that special needs kid is always provided for. You get my point.
@rickle1, I mostly disagree with you. Twenty years ago, people would have said robo advisors couldn’t do what they are now doing. Over time, automation will be able to draw on and replace more and more of human judgment. It is already the case that, drawing on huge data sets, neural nets produce better outcomes than doctors on a number of medical diagnoses (see, for example, cancer diagnoses (https://thestack.com/big-data/2018/06/19/google-ai-predicted-cancer-diagnosis-better-than-doctors/) and pneumonia (https://www.fastcompany.com/90152230/new-ai-can-diagnose-pneumonia-better-than-doctors). Over time, with more data and better technology, AI will expand the scope of judgment-based decisions on which it is superior to humans.
In my observation, tech folks overestimate the speed of change but actually underestimate the scope of the change when it arrives. The right approach is to start slow and learn on simple domains and then let the learning address more complicated domains.
Where I agree with you is that it may be hard to beat human judgment in situations in which there are so many variables that one can’t get a sufficient number of observations to train the models on. Then, a neural net probably doesn’t have the data to do better than human reasoning (even our somewhat flawed reasoning).
^ When you enter the counseling world (really any type of counseling), you leave the world of logic. This is where I think AI will struggle. In many cases it’s the difference between wants and needs. A robo advisor telling me what I should do based on certain data points is not really relevant when I may not know (and most likely don’t) what questions to ask, what bear traps to avoid, etc. Also, what I want and what I need are likely two very different things.
I’m sure AI can be refined to add many layers into the decision tree but I can’t see how it will know how to use the information provided and not provided to advise accordingly. Let’s use the tax return / CPA scenario already explored.
I’m a business owner. I meet with my CPA to discuss forward tax planning each year. They want to know what my future plans are, and when I don’t have answers, they ask a lot of pointed questions. This leads to some basic things like: should you start a SEP, a Defined Benefit Plan, etc.? How would AI process the fact that most people don’t even know what those are? How would the client become educated? How would AI know that, beyond education, I don’t want to start a qualified plan because I don’t want to have to fund employee benefits? How would AI then redirect me to a bunch of non-qualified plans when it doesn’t know I was considering any of that? How would AI lay out a comparison of alternatives that we didn’t even discuss (because, as a professional, the CPA thought of some interesting scenarios he thought I might like)? That is the world of planning and advising. It’s not as simple as setting up asset allocation models for types A, B, C, etc.
Yes, I’m sure many will say it can all be programmed. Perhaps it can. But the human element is not binary, so the decision tree, at least today, is not very helpful. I think there will always be a need for humans to guide or counsel other humans regardless of the industry. AI equals commoditization and scale. Many basic things will be accessed that way (say, if you want a size 10 Converse sneaker or a new Tesla), but humans will be involved at the deeper levels of guidance.
Most small business tax business planning could be reduced to a flow chart, which is very easy to program. Now, you would still need to input your information, like you have to input your information into turbo tax. The AI could ask you the same questions your CPA asks you.
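To make the "flow chart" point concrete, here is a minimal sketch of how such a decision tree could be encoded. The questions, thresholds, and plan names are invented for illustration only (not real tax advice), and a real tool would cover far more branches:

```python
# A flow chart encoded as nested dicts: each node is either a question
# with per-answer branches, or a recommendation (a leaf string).
# All questions and recommendations here are hypothetical examples.
FLOW = {
    "question": "Do you have employees other than yourself?",
    "branches": {
        "no": {
            "question": "Do you want to shelter more than $20k/year?",
            "branches": {
                "yes": "Consider a solo defined benefit plan.",
                "no": "Consider a SEP-IRA.",
            },
        },
        "yes": "A qualified plan may require funding employee benefits; "
               "review non-qualified alternatives.",
    },
}

def advise(node, answers):
    """Walk the flow chart using a dict mapping questions to answers."""
    while isinstance(node, dict):
        node = node["branches"][answers[node["question"]]]
    return node
```

The interview a CPA conducts would correspond to collecting the `answers` dict, question by question, exactly as tax software already does.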
@rickle1, I wonder if you are shooting at a dated strawman. Neural nets/machine learning are really pattern recognition, which at some level is similar to what experienced people do when giving advice in complex situations.
The problem you are focusing on is “What if the real problem is more complex than the problem the AI is focused on solving?” Well, yes indeed, that is a problem. It’s kind of like autonomous driving where we teach the AI only to use the steering wheel and take in visual stimuli, but don’t mention the brakes or accelerator. Then we say, “The autonomous driving systems are steering really well, but they don’t know how to brake.” We will have been working on the wrong problem. Can we stipulate that we address the right problem?
Assuming that stipulation, what one would need to do is sit with advisors and go through many client cases to understand which information matters to solve the problem we’ve defined. If your data set includes many, many such examples, with all of the information that the CPA thought to ask about, then the algorithm would produce results that respond to the actual problem and would match, or over time do better than, what one experienced CPA could provide.
I do tend to agree with @roethlisburger that the problem of small business tax planning for many small businesses is not all that complex.
I’m actually doing some work in this arena now (working with techies to develop AI to advise people in an area where they either get no advice or don’t have good advisors). I’ll let you know how it is going.
I guess I’m not explaining myself well. As an advisor, I frequently find myself bringing up issues that the client has never considered, is not educated on and knows nothing about. Small business owner tax planning is just one “pulled from a hat” topic (although there are hundreds of topics and sub-topics related to small business tax planning). Or clients come in for guidance on X and, through discovery, leave with guidance on Y (say, an investor who wants to focus on asset mix moving to a conversation about protection-based planning, such as LTC; without that, who cares what you’re invested in, as it all goes to the nursing home…).
I suppose you’ll have an answer for all these what ifs. Maybe there is an answer for all these what ifs. I just know that real planning is about multifaceted conversations that spin in many directions and it’s the role of the advisor to uncover situations, solve problems, prioritize solutions and implement change (and revisit to stay current).
Advising and counseling (financial or otherwise*) seems to be a difficult market in terms of sorting quality. Customers are often naive about the subject, which means that they may not make choices of advisors and counselors that are significantly better than random selection. Enough customers may be getting bad ones that some may prefer a more tightly bounded quality outcome from a robo-advisor than trying to find a better human advisor while risking finding a poor one.
*Physician and dentist services are other areas where advising and counseling (about medical and dental options) are included in the service, though they also include other services that are less suitable for a robo-physician or robo-dentist.
@rickle1, I believe I understand what you are saying. Advising small business owners generally is a complex problem in which the client doesn’t know the right questions to ask, or have knowledge of what he or she needs, or of what he or she is doing that may be problematic from legal, tax or other standpoints. Only with probing questions and a lot of experience can you in effect define the problem that you then need to help the client solve. When I worked for a family office on Wall Street, I worked with fabulous tax people (the top people from one of the big accounting firms). Most of the tax/accounting folks I have dealt with tell me what I can’t do, not the best ways to do what I want to do. So, when I left that job and set up on my own, I asked around for someone who was really good at a) starting with the problem I wanted to solve and telling me how to do it (including giving me more and less risky ways to address the problem); and b) coming to me with suggestions for ways of doing things better. I interviewed a number of people before finding an excellent one. The basic structure he set up has probably saved me $10K to $20K per year since 1992. In addition, he approached me with a way for me to fund $200K of pretax savings per year. When we’ve discovered problems over the years, he’s found ways to solve them. So, I am deeply appreciative of the quality of advice a good tax advisor can provide.
What I am saying is that in principle, the same kind of pattern recognition that he uses to give me advice or that you use with your clients can over time be trained – whether you are doing it by the apprentice model with a junior accountant or with a neural net. However, with any AI system, you need to provide lots and lots of data with all of the themes and variations that you and other good advisors have uncovered over the years. Today, that would be very hard to do. I believe that you can do it for simpler problems now, but conceptually what works now will probably be able later to take on much, much more complex problems.
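As a toy illustration of that "pattern recognition over past cases" idea, here is a sketch of the simplest possible version: recommend whatever an experienced advisor chose for the most similar prior client. The client features, numbers, and plan labels are all invented, and a real system would need vastly richer data and a trained model rather than nearest-neighbor lookup:

```python
import math

# Hypothetical past cases: (revenue in $k, employees, owner age)
# paired with the plan an experienced advisor recommended.
PAST_CASES = [
    ((150, 0, 55), "SEP-IRA"),
    ((900, 12, 48), "401(k) with safe harbor"),
    ((300, 1, 62), "defined benefit plan"),
    ((120, 0, 35), "SEP-IRA"),
]

def recommend(profile):
    """Return the recommendation attached to the most similar past case."""
    def distance(a, b):
        # Plain Euclidean distance; a real system would weight features.
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    _, label = min(PAST_CASES, key=lambda case: distance(case[0], profile))
    return label
```

The quality of the output is bounded entirely by the coverage of the case base, which is exactly the "lots and lots of data with all the themes and variations" requirement above.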
And, I agree with @ucbalumnus that there are many mediocre advisors and that prospective clients are frequently not capable of identifying who is good and who is not (or who is better and who is worse) precisely for the reasons you elucidate: they don’t even know what questions to ask. So, many might be better served by a robo-advisor than by a random selection from the distribution of advisors. The robo-advisor will likely not be as good as the best advisors but could be better than the median advisor.
It is also the case that people tend to overestimate their capability relative to others. Doctors won’t use the decision-support tools that data shows clearly outperform doctors, because they assume their judgment is better than the decision-support tool.
Incidentally, here’s a note from Nasdaq to financial advisers urging them to get behind AI and figure out how to supplement it. https://www.nasdaq.com/article/financial-advisors-are-you-ready-to-embrace-artificial-intelligence-cm998275.
People who ignore the upcoming tech disruptions do so at their own peril. These disruptions are much more than automation of certain tasks; there could be a whole new paradigm. Major Wall Street banks, hedge funds, etc. are all hiring increasingly large numbers of tech talent. They don’t want to be disintermediated by someone else’s technologies. They understand their survival could be at stake if they fail to stay ahead of the curve.
It’s an interesting conversation. We’ll have to wait and see what it means for the counseling side of most businesses. I agree that the product side can easily replace humans with fulfillment and distribution centers. Will there even be things like car dealerships? Certainly not using the same business model.
I think one point of the article was not that some “fuzzy” skills jobs won’t go away with AI but rather that people with “fuzzy” skills (social scientists, etc) are necessary to develop AI. They have studied how people and societies interact, thus are best able to know what the AI needs to consider…?
^ Thanks for getting us back on track. It’s not whether AI is coming; it is. It’s about what skills will be needed in the workplace of tomorrow. I feel being well rounded will serve one well. A tech guy/gal with great leadership skills and a fine fuzzy (communicator, writer, analyst type) with tech skills will both be quite valuable. I think you need both. The pure coders will become quite commoditized (they are already getting that way), as you can hire those skills easily (including offshore). The pure fuzzies will need the tech basics to be in the game.
The examples they give seem either contrived or niche. If you have robots in a factory or warehouse, you want them operating as efficiently as possible, not doing ballerina movements while they build your iPhone or car or ship your Amazon order. If you have a robo financial planner, tax preparer, insurance agent, or legal adviser, all your interactions will be virtual. As for one of the other examples, you don’t need a degree in psychology to figure out what someone means when they say: “I need to look good, I’m going to my ex-boyfriend’s wedding. Oh, wow, that’s — there’s a lot of meaning in that.”
The central question is not whether fuzzies or techies are going to dominate the world in the future, but whether only a few techies and fuzzies at the top of their game will dominate. The world is going to be much more divided, on all sorts of measures, than it already is today. If we think we have serious problems in our world today, we haven’t seen anything yet.