Robo-Advising: Enhancing Investment with Inverse Optimization and Deep Reinforcement Learning (Semantic Scholar, 2024)

Figures and Tables from this paper

  • Table 1
  • Figure 1
  • Table 2
  • Figure 2
  • Figure 3

Topics

Expected Return · Inverse Optimization · Robo-advising · Machine Learning


7 Citations

Intelligent Systematic Investment Agent: an ensemble of deep learning and evolutionary strategies

This paper proposes a new approach to developing long-term investment strategies from a series of short-term purchase decisions, using an ensemble of evolutionary algorithms and a deep learning model, and provides empirical evidence of the ensemble's superior performance.

Learning risk preferences from investment portfolios using inverse optimization
    Shih-Ti Yu, Haoran Wang, Chaosheng Dong

    Economics, Computer Science

    Research in International Business and Finance

  • 2023
  • 8
  • PDF
Recent advances in reinforcement learning in finance
    B. Hambly, Renyuan Xu, Huining Yang

    Computer Science, Business

    SSRN Electronic Journal

  • 2021

This survey paper aims to review the recent developments and use of RL approaches in finance, including optimal execution, portfolio optimization, option pricing and hedging, market making, smart order routing, and robo‐advising.

Local Differential Privacy for Regret Minimization in Reinforcement Learning
    Evrard Garcelon, Vianney Perchet, Ciara Pike-Burke, Matteo Pirotta

    Computer Science

    NeurIPS

  • 2021

A lower bound for regret minimization in finite-horizon MDPs with LDP guarantees is established, showing that guaranteeing privacy has a multiplicative effect on the regret.
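
For context, the local privacy model referenced here can be stated compactly (a standard definition, not something specific to this paper): a randomizer $M$ satisfies $\varepsilon$-local differential privacy if, for all inputs $x, x'$ and every measurable set $S$,

    \[ \Pr[M(x) \in S] \;\le\; e^{\varepsilon}\, \Pr[M(x') \in S], \]

so each user perturbs its own trajectory feedback before the learner sees it, and the lower bound above says that any algorithm restricted to such privatized feedback must pay a regret penalty that grows as $\varepsilon$ shrinks.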

Privacy Amplification via Shuffling for Linear Contextual Bandits
    Evrard Garcelon, Kamalika Chaudhuri, Vianney Perchet, Matteo Pirotta

    Computer Science

    ALT

  • 2022

This work considers the shuffle model of privacy, shows that a privacy/utility trade-off between JDP and LDP is achievable, and presents an algorithm with a provable regret bound that guarantees both central and local privacy.

Bridging The Gap between Local and Joint Differential Privacy in RL
    Evrard Garcelon, Vianney Perchet, Ciara Pike-Burke, Matteo Pirotta

    Computer Science, Mathematics

  • 2021

By leveraging shuffling techniques, this paper presents an algorithm that, depending on the provided parameter, is able to attain any privacy/utility value between the pure JDP and pure LDP guarantees.

  • PDF
Determinants of conventional and digital investment advisory decisions: a systematic literature review
    Fabian Wagner

    Economics, Business

    Financial Innovation

  • 2024

A systematic literature review evaluated 97 publications on the determinants of conventional and digital investment advisory decisions and identified five main determinants that are important for such decisions.

  • PDF

55 References

Learning to trade via direct reinforcement
    J. Moody, M. Saffell

    Computer Science

    IEEE Trans. Neural Networks

  • 2001

It is demonstrated how direct reinforcement can be used to optimize risk-adjusted investment returns (including the differential Sharpe ratio), while accounting for the effects of transaction costs.

  • 422
  • PDF
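
A convenient per-step reward for this kind of direct reinforcement is the differential Sharpe ratio; the sketch below is a minimal NumPy rendering of the usual recursion with exponential moving moment estimates (the zero initialization and the small-denominator guard are our own assumptions, not taken from the paper).

    import numpy as np

    def differential_sharpe_ratio(returns, eta=0.01):
        """Online differential Sharpe ratio in the spirit of Moody & Saffell.

        A and B track exponential moving estimates of the first and second
        moments of returns; D[t] is the sensitivity of the moving Sharpe
        ratio to the newest return and can serve as a per-step RL reward.
        """
        A, B = 0.0, 0.0                       # moment estimates (assumed init)
        D = np.zeros(len(returns))
        for t, r in enumerate(returns):
            dA, dB = r - A, r ** 2 - B        # innovations
            denom = (B - A ** 2) ** 1.5       # (variance estimate)^(3/2)
            D[t] = (B * dA - 0.5 * A * dB) / denom if denom > 1e-12 else 0.0
            A += eta * dA                     # update moving moments
            B += eta * dB
        return D
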
Deep hedging
    Hans Buehler, Lukas Gonon, J. Teichmann, Ben Wood

    Business, Computer Science

    Quantitative Finance

  • 2019

This work presents a framework for hedging a portfolio of derivatives in the presence of market frictions such as transaction costs, liquidity constraints or risk limits using modern deep reinforcement machine learning methods and shows that the set of constrained trading strategies used by the algorithm is large enough to ε-approximate any optimal solution.
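
Schematically, the deep-hedging objective minimizes a convex risk measure of the hedged terminal P&L over neural-network trading strategies (the notation below is ours and simplified relative to the paper):

    \[ \min_{\theta}\; \rho\Big( -Z \;+\; \sum_{t=0}^{T-1} \delta_{\theta}(t, I_t)\cdot\big(S_{t+1}-S_t\big) \;-\; C_T(\delta_{\theta}) \Big), \]

where $Z$ is the derivative liability, $S$ the tradable assets, $I_t$ the information available at time $t$, $C_T$ the accumulated transaction costs, and $\rho$ a convex risk measure such as CVaR; the $\varepsilon$-approximation result says that networks $\delta_\theta$ from this constrained family can get arbitrarily close to an optimal strategy.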

Learning Time Varying Risk Preferences from Investment Portfolios using Inverse Optimization with Applications on Mutual Funds
    S. Yu, Yuxin Chen, Chaosheng Dong

    Economics, Business

    ArXiv

  • 2020

This paper presents a novel approach to measuring risk preference from existing portfolios, using inverse optimization on the mean-variance portfolio allocation framework, which allows the learner to continuously estimate real-time risk preferences from concurrently observed portfolios and market price data.
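
As a minimal illustration of the idea (not the paper's full time-varying, constrained formulation), the sketch below backs out a single mean-variance risk-aversion coefficient from one observed portfolio under an assumed unconstrained Markowitz forward model; the function name and the toy numbers are ours.

    import numpy as np

    def implied_risk_aversion(w_obs, mu, Sigma):
        """Inverse optimization for an unconstrained mean-variance investor.

        Forward model assumed here: w*(gamma) = Sigma^{-1} mu / gamma, the
        maximizer of mu'w - (gamma/2) w'Sigma w. The inverse problem fits
        gamma to the observed weights by least squares (closed form).
        """
        v = np.linalg.solve(Sigma, mu)        # Sigma^{-1} mu
        inv_gamma = (v @ w_obs) / (v @ v)     # best-fit 1/gamma
        return 1.0 / inv_gamma

    # toy check: an investor with gamma = 4 is recovered exactly
    mu = np.array([0.05, 0.08, 0.03])
    Sigma = np.diag([0.04, 0.09, 0.01])
    w_obs = np.linalg.solve(Sigma, mu) / 4.0
    print(implied_risk_aversion(w_obs, mu, Sigma))   # ~4.0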

Large Scale Continuous-Time Mean-Variance Portfolio Allocation via Reinforcement Learning
    Haoran Wang, X. Zhou

    Computer Science, Mathematics

    SSRN Electronic Journal

  • 2019

This work devises a scalable and data-efficient RL algorithm and conducts large-scale empirical tests using data from the S&P 500 stocks, finding that the method consistently achieves over 10% annualized returns and outperforms econometric methods and the deep RL method by large margins.

  • 11
  • Highly Influential
  • PDF
Personalized Robo-Advising: Enhancing Investment through Client Interactions
    A. Capponi, Sveinn Ólafsson, T. Zariphopoulou

    Business, Computer Science

    Manag. Sci.

  • 2022

A novel framework in which a robo-advisor interacts with a client to solve an adaptive mean-variance portfolio optimization problem is introduced, and it is argued that the optimal portfolio's Sharpe ratio and return distribution improve if the robo-advisor counters the client's tendency to reduce market exposure during economic contractions, when the market risk-return tradeoff is more favorable.

Continuous‐time mean–variance portfolio selection: A reinforcement learning framework
    Haoran Wang, X. Zhou

    Computer Science, Mathematics

    Mathematical Finance

  • 2020

This work proves that the optimal feedback policy for the continuous‐time mean‐variance portfolio selection with reinforcement learning must be Gaussian, with time‐decaying variance, and then proves a policy improvement theorem, based on which an implementable RL algorithm is devised.

  • 97
  • Highly Influential
  • PDF
Generalized Inverse Optimization through Online Learning
    Chaosheng Dong, Yiran Chen, Bo Zeng

    Computer Science, Mathematics

    NeurIPS

  • 2018

This paper develops an online learning algorithm that uses an implicit update rule which can handle noisy data and proves that the algorithm converges at a rate of $\mathcal{O}(1/\sqrt{T})$ and is statistically consistent.
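
The implicit update rule mentioned above is, schematically, a proximal step (the loss symbol and feasible-set notation are ours; the specific loss follows the paper):

    \[ \theta_{t+1} \;=\; \arg\min_{\theta \in \Theta}\; \tfrac{1}{2}\,\|\theta - \theta_t\|^2 \;+\; \eta_t\, \ell\big(\theta;\,(u_t, x_t)\big), \]

where $\ell$ scores how suboptimal the observed decision $x_t$ is, given signal $u_t$, under candidate objective parameters $\theta$, and a step size $\eta_t \propto 1/\sqrt{t}$ is consistent with the stated $\mathcal{O}(1/\sqrt{T})$ convergence rate.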

Dirichlet Policies for Reinforced Factor Portfolios
    Eric André, Guillaume Coqueret

    Economics, Computer Science

  • 2020

Across a large range of implementation choices, the results indicate that RL-based portfolios are very close to the equally weighted (1/N) allocation, which implies that the agent learns to be agnostic with regard to factors.
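
A Dirichlet policy is a natural fit here because its samples live on the simplex and are therefore valid long-only, fully invested weight vectors; the toy sketch below (NumPy, with sizes and concentrations of our own choosing) shows why equal concentration parameters place the policy's mean allocation exactly at the 1/N benchmark the trained portfolios end up close to.

    import numpy as np

    rng = np.random.default_rng(0)

    N = 10                            # number of factors/assets (arbitrary)
    alpha = np.full(N, 5.0)           # equal concentrations for illustration

    w = rng.dirichlet(alpha)          # one sampled allocation: w >= 0, sum(w) = 1
    print(w.round(3))
    print(alpha / alpha.sum())        # policy mean = alpha / sum(alpha) = 1/N here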

Robo Advisors: quantitative methods inside the robots
    M. Beketov, Kevin Lehmann, M. Wittke

    Business, Engineering

    Journal of Asset Management

  • 2018

It is shown that Modern Portfolio Theory remains the main framework used in robo-advisors (RAs) worldwide, and that assets under management (AuM) tend to be higher for systems applying newer and more sophisticated methods.

  • 68
  • PDF
Continuous control with deep reinforcement learning
    T. Lillicrap, Jonathan J. Hunt, Daan Wierstra

    Computer Science

    ICLR

  • 2016

This work presents an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces, and demonstrates that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs.
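
For reference, the two coupled updates behind this actor-critic scheme can be written compactly (standard DDPG notation, not specific to any application in this list): the critic $Q_\phi$ regresses onto the bootstrapped target

    \[ y = r + \gamma\, Q_{\phi'}\big(s',\, \mu_{\theta'}(s')\big), \]

while the deterministic actor $\mu_\theta$ follows the gradient

    \[ \nabla_\theta J \;\approx\; \mathbb{E}\big[\, \nabla_a Q_\phi(s,a)\big|_{a=\mu_\theta(s)}\;\nabla_\theta \mu_\theta(s) \,\big], \]

with target networks $\phi', \theta'$ tracking the online networks by Polyak averaging and exploration added as noise on the continuous actions.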

...
